
Has anybody else encountered corrupted HTTPS streams lately?

Associate Professor

Re: Has anybody else encountered corrupted HTTPS streams lately?

Well, a giant hell storm doesn't typically take the gateway out long enough to warrant a failover to a backup gateway... I assume the estimated outage duration would have to extend for days, or longer, before Hughes would consider rerouting 4 beams' worth of users to another gateway.
New Member

Re: Has anybody else encountered corrupted HTTPS streams lately?

LOL! Love it!! Kiddies are Carrier Petri Dishes!!  Yep--Especially if they are age 5 years; Well--"That" was the excuse used for the H1N1 flu virus. (A 5 YO Mexican boy who apparently dug up & played with a 1918 flu victim; then, even though he NEVER left his Country--he somehow spread it all over the North American continent!)  And, oh yeah--ALL the subsequent mutation cases of that little "buggy-boo" have been tracked to 2 little 5 year olds--1 in PA & the other in IL! (Again, they managed to spread it Worldwide!)  Real "Funny"--& I'm On The NFL!  IMO we should Quarantine ALL 5 Year Olds!!!

Remembering the classes I used to teach & sure enough: 1 kid would come in sick--in turn it just spread. Round & round it goes; where it stops, nobody knows! (Though there ARE more than a few Politicians & Fed Agents I'd LOVE To Send It To!) LOL!

New Member

Re: Has anybody else encountered corrupted HTTPS streams lately?

I've been examining the packet captures of the failed HTTP downloads more carefully... What's very curious is that there are packets showing up on the client side that never came from the server, and vice versa. What's more, the TCP headers on packets sent in both directions do not match between the client-side and server-side captures, and the TCP checksums have been updated to reflect the changes.

This is suspicious because most network devices (routers, switches) have no business touching TCP information, which is an end-to-end transport-layer protocol. It's almost as if there is some device between my router and the server that is transparently interfering with the download (probably due to misconfiguration or overload).

I just got a clean capture of a TCP session on a non-HTTP port over IPv6 at three points: on the client computer, between my modem and the router (using a hub as a tap), and at the server. Even looking only at the initial SYN packet, I can see that before and after my router, the packets are identical down to the TCP checksum. (Since it's an IPv6 connection, there's no need to mangle the packet with NAT.) Between my router and the server, however, something is altering my packet on the fly.
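
For anyone who wants to reproduce that comparison, here is a minimal Python sketch. The header bytes below are synthetic stand-ins built with struct, not the actual capture data; a real analysis would feed in TCP header bytes extracted from the two pcaps. It parses the fixed 20-byte TCP header and diffs two copies of the "same" packet field by field:

```python
import struct

def parse_tcp_header(data):
    """Unpack the fixed 20-byte TCP header into named fields."""
    sport, dport, seq, ack, off_flags, window, checksum, urg = \
        struct.unpack("!HHIIHHHH", data[:20])
    return {
        "src_port": sport,
        "dst_port": dport,
        "seq": seq,
        "ack": ack,
        "header_len": (off_flags >> 12) * 4,  # data offset, in bytes
        "flags": off_flags & 0x01FF,
        "window": window,
        "checksum": checksum,
    }

def diff_headers(sent, received):
    """Return {field: (sent_value, received_value)} for every altered field."""
    a, b = parse_tcp_header(sent), parse_tcp_header(received)
    return {k: (a[k], b[k]) for k in a if a[k] != b[k]}

# Synthetic stand-ins for the client-side and server-side captures:
client_syn = struct.pack("!HHIIHHHH",
                         40000, 8888, 1000, 0,
                         (10 << 12) | 0x002,  # 40-byte header, SYN flag
                         28800, 0xA3EC, 0)
server_syn = struct.pack("!HHIIHHHH",
                         40000, 8888, 1000, 0,
                         (8 << 12) | 0x002,   # 32-byte header, SYN flag
                         16384, 0x849A, 0)

changed = diff_headers(client_syn, server_syn)
# header_len, window, and checksum all differ in flight.
```

A diff of the ports, sequence, or acknowledgment numbers would point at NAT; a diff confined to the header length, window, and checksum (as here) points at option rewriting by a middlebox.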

I sent a 94-byte SYN packet with the following characteristics:
Window size: 28800
Options:
MSS 1440
TCP SACK permitted
Timestamps
NOP
Window Scale 7 (128)
TCP Checksum: 0xa3ec

What did the server receive? An 84-byte SYN packet:
Window Size: 16384
MSS 1440
NOP
NOP
TCP SACK permitted
NOP
Window Scale 3 (8)
TCP Checksum: 0x849a

(Yes, they're the "same" packets. I sent no other data on that port during the capture. Yes, Web Acceleration is off, and besides, this was not an HTTP session, nor was it on port 80.)
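
To put numbers on the rewrite: with TCP window scaling (RFC 1323), the effective receive window is the advertised window shifted left by the negotiated scale factor. Plugging in the values from the two captures above:

```python
def effective_window(advertised, scale_shift):
    """Maximum receive window per RFC 1323: advertised << scale."""
    return advertised << scale_shift

as_sent = effective_window(28800, 7)      # 28800 * 128 = 3,686,400 bytes
as_received = effective_window(16384, 3)  # 16384 * 8   =   131,072 bytes
# The rewritten handshake caps this connection's window at roughly
# 3.5% of what the client actually offered.
```

That alone would throttle throughput on a high-latency satellite link, even before any packets go missing.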

Is there some additional NAT or proxy device in action besides the Web Acceleration system, somewhere within the byzantine networks of HughesNet's ground stations? Is this "TCP Acceleration", a feature mentioned briefly in status reports deep within the HT1000's control center? 

On the HTTP downloads, the strangest part is that the client receives several retransmissions of a packet it has already acknowledged (shortly before it gives up and sends a RST to the server). Having access to the server-side captures of that session, I can see that the server sent that packet only once. In addition, many packets sent by the server never arrive at the client at all. It therefore seems likely that something between me and the server is munging my packets, failing badly at it, and trying to act as a TCP endpoint--hence the problems I and others have been having downloading large files.
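
That claim is checkable mechanically. Given lists of sequence numbers pulled from the server-side and client-side captures (e.g. with tshark; the sample values below are hypothetical), a short sketch flags every segment the client saw more often than the server actually sent it:

```python
from collections import Counter

def spurious_retransmissions(server_sent, client_received):
    """Return {seq: (times_sent, times_received)} for each sequence
    number that showed up at the client more often than the server
    transmitted it -- evidence of a middlebox retransmitting on its own."""
    sent = Counter(server_sent)
    recv = Counter(client_received)
    return {seq: (sent[seq], recv[seq])
            for seq in recv if recv[seq] > sent[seq]}

# Hypothetical sample: the server sent segment 1448 once, but the
# client's capture shows it arriving three times.
extra = spurious_retransmissions([0, 1448, 2896],
                                 [0, 1448, 1448, 1448, 2896])
```

The converse check (sequence numbers in `server_sent` that never appear in `client_received`) would enumerate the packets that vanish entirely.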

The fact that I am not the only one having problems downloading these files indicates that the problem lies not with my modem, but with something in HughesNet's network architecture. I think HughesNet's engineers need to look into this, because something is clearly broken, and that broken something is wasting your customers' bandwidth by forcing us to re-download files--which, by extension, puts a drain on the entire system.

Packet captures will be uploaded tomorrow after I sanitize them.
New Member

Re: Has anybody else encountered corrupted HTTPS streams lately?

I'm still having trouble downloading certain large files. One of the gems required to install Ruby on Rails fails around 50% each time I try to download it. The URL is https://rubygems.org/downloads/nokogiri-1.6.6.2.gem

Would it be helpful to build up a list of files that I (and probably others) cannot download reliably?
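
One way to build that list (a sketch; the known-good digests would have to come from the publishers, e.g. rubygems.org lists the SHA-256 of each gem): fetch each candidate and record its length and SHA-256. A digest that changes between attempts means the corruption is happening in flight, not at the origin.

```python
import hashlib
import urllib.request

def fingerprint(data):
    """Return (length, sha256 hexdigest) of a downloaded file body."""
    return len(data), hashlib.sha256(data).hexdigest()

def check_download(url, expected_sha256=None):
    """Fetch url and compare its SHA-256 against a known-good value.

    Returns (length, digest, ok). With expected_sha256=None the
    comparison is skipped and the fingerprint is just recorded for
    diffing against later attempts.
    """
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    length, digest = fingerprint(body)
    ok = expected_sha256 is None or digest == expected_sha256
    return length, digest, ok
```

Running `check_download` a few times per URL and tabulating the results would turn "fails around 50%" into hard numbers.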

This is particularly frustrating because my HughesNet connection, contrary to the experience of some of my fellow customers, has actually been very stable. On occasion, I've downloaded multi-gigabyte OS updates without issue (at night, of course). Ironic that the problematic files are all less than 10MB or so.

Speaking of bandwidth, I've used several hundred megabytes over the past week to debug this issue. (I'm usually very sparing of my bandwidth.) Would it be possible to get a token for this? I do share this connection...
New Member

Re: Has anybody else encountered corrupted HTTPS streams lately?

Sorry, that should be 86 bytes, not 84.
Assistant Professor

Re: Has anybody else encountered corrupted HTTPS streams lately?

Downloaded fine for me on Windows using Chrome. Wonder if it's a Linux + Hughes issue.
Alum

Re: Has anybody else encountered corrupted HTTPS streams lately?

Hi Jezra,

I was just thinking of this. Jackson, have you tried toggling web acceleration back on? I was reading back through the thread and I remembered you had said you always have web acceleration toggled off. If you can test this that would be great.

Thanks,
Chris
New Member

Re: Has anybody else encountered corrupted HTTPS streams lately?

Hey Chris,

50MB of bandwidth later :( I can report that the problem persists with Web Acceleration on. The Nokogiri gem (https://rubygems.org/downloads/nokogiri-1.6.6.2.gem) fails to download on the following systems:

Internet Explorer on Windows 8.1
Firefox on Windows 8.1
Chrome on Windows 8.1
Internet Explorer on Windows 7
Firefox on Ubuntu 14.04LTS
Chrome on Ubuntu 14.04LTS
wget on Ubuntu 14.04LTS
wget on Ubuntu 12.04LTS

I think we can conclude that A) This is a cross-platform, cross-browser issue and B) Web Acceleration does not affect it*.

*Web Acceleration *may* make the problem go away for plain ol' unencrypted HTTP traffic. I haven't tested that yet because I don't have an HTTP URL that predictably fails to download, and I'm not using more bandwidth right now. But your Web Acceleration system is not capable of handling HTTPS, because it is encrypted end-to-end, so it cannot affect the failing downloads. (If it were capable, I'd be very concerned, because that would mean you were sslstrip-ing our encrypted traffic, which is rude, to say the least, and a legal landmine if you don't get your customers' permission first or at least notify them.)

In addition, the fact that non-HTTP/HTTPS ports (such as 8888) are being manipulated at the TCP level suggests another system is to blame. Web Acceleration "fixing" the issue (if it does) for HTTP does not fix it for TCP protocols using any of the other 65,534 ports.

I wonder if UDP is affected too?
New Member

Re: Has anybody else encountered corrupted HTTPS streams lately?

I did verify that Web Acceleration was actually working by checking that it was inserting the JavaScript timing/failover code into every HTTP response. It was.
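
For anyone who wants to repeat that check offline against a saved page, here's a rough sketch. The marker string is a placeholder: the exact strings in the injected timing/failover code would have to be identified by inspecting a page fetched through the proxy.

```python
import re

def find_injected_scripts(html, marker):
    """Return every <script> element whose src or body mentions marker.

    Comparing the results for the same page fetched with Web
    Acceleration on vs. off shows exactly what the proxy inserts.
    """
    scripts = re.findall(r"<script\b[^>]*>.*?</script>",
                         html, re.IGNORECASE | re.DOTALL)
    return [s for s in scripts if marker.lower() in s.lower()]
```

An empty result for the proxied copy would mean Web Acceleration wasn't actually in the path for that request.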
New Member

Re: Has anybody else encountered corrupted HTTPS streams lately?

Tried it with Mint 17.2 and got this error right off the bat: (tmp/ltkeKIo1.tar.part could not be saved, because the source file could not be read.)

I have to go with sgoshe; I do believe it is a Hughes compatibility error on some sites. I haven't had problems on a lot of the https sites I've been to, but these seem to be the problem children. I tried it with my old XP machine, and all I got was encryption code with IE and the same error message with Firefox. Midori under Linux thought it worked, but the file was empty.