6 posts • joined 9 Aug 2007
@Austin: revenue types
I think you want an "of" somewhere in: "....a large portion revenue." :-)
While AMD and Intel have gone back and forth in terms of CPU speeds and performance, I've preferred AMD's HyperTransport with the dedicated memory controller on the CPU itself, rather than requiring the northbridge to handle all main memory access. Now that Intel is switching over to the same type of design, one of the main advantages of AMD's architecture is moot.
I don't really care about Intel's Hyperthreading support, as a 4-core CPU running 1 thread apiece is still potentially going to be quite busy in terms of memory access and so forth, and relatively few apps out there are going to show a difference between 4 and 8 threads. On the other hand, the ability to bump up the clock speed by a notch or two when the system is mostly running a single thread strikes me as a nifty idea.
Comcast abusing network standards...
"By contrast, most Internet traffic moving upstream on residential broadband networks comes from applications with no more than one stream active at a time."
Actually, anyone who has more than one mail account set up in clients like Outlook, Thunderbird, or Mail.app will end up with multiple streams of traffic going, and most modern browsers enable a certain degree of concurrency. In Firefox, for example, network.http.max-connections is 24, although it will only open up to 8 connections per webserver.
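For anyone who wants to check or tune these, the limits live in Firefox's about:config prefs (a user.js fragment; the values shown are the defaults described above, from the Firefox 2-era builds I've looked at):

```js
// user.js -- Firefox connection-concurrency prefs
user_pref("network.http.max-connections", 24);           // total concurrent HTTP connections
user_pref("network.http.max-connections-per-server", 8); // cap per individual webserver
```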
While your list of suggestions is a reasonable starting point, it's hardly the case that Comcast's practices live up to them. Let's go over some details:
"Does the practice support a rational goal, such as the fair distribution of bandwidth?"
This is a fine goal. Fair distribution of network bandwidth could be readily implemented by having a queue per IP address and pulling packets from each active queue in a WFQ/WF2Q fashion.
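To make that concrete, here's a minimal sketch (in Python, names hypothetical) of the simplest member of that family -- plain per-source fair queuing, one FIFO per IP serviced round-robin; real WFQ/WF2Q+ additionally weight by packet size and finish times:

```python
from collections import defaultdict, deque

class FairQueue:
    """Per-source round-robin fair queuing: one FIFO per source IP,
    serviced in turn, so no single host can starve the others."""

    def __init__(self):
        self.queues = defaultdict(deque)  # source IP -> packet FIFO
        self.order = deque()              # rotation of IPs with queued packets

    def enqueue(self, src_ip, packet):
        if not self.queues[src_ip]:
            self.order.append(src_ip)     # source just became active
        self.queues[src_ip].append(packet)

    def dequeue(self):
        """Emit one packet from the next active source, round-robin."""
        if not self.order:
            return None
        ip = self.order.popleft()
        packet = self.queues[ip].popleft()
        if self.queues[ip]:
            self.order.append(ip)         # still active: back of the rotation
        return packet
```

Even if one subscriber floods its queue, each active source still gets an equal share of dequeues -- which is the "fair distribution" the criterion asks for, with nothing forged and nothing dropped.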
"Is it applied, adapted, or modified by network conditions?"
This is a bit too vague to become a useful criterion.
"Does it conform to standard Internet practices, or to national or international standards, and if not, does it improve on them?"
Forging reset packets is obviously a violation of standard Internet practices and the denial-of-service aspect is quite arguably criminal.
"Has it been communicated to customers?"
As Cade Metz eloquently said, "Eight months after an independent researcher revealed that Comcast was secretly throttling BitTorrent and other P2P traffic, the beleaguered American ISP has at last admitted that's exactly what it's doing."
"Has technical information that would allow for independent analysis been made available to the research community and the public at large?"
Indeed. Comcast's customers ought to be notified of just how much overselling of the bandwidth they have paid for is going on, and just what the average available bandwidth actually is at the various service levels.
"Does the practice interfere with customer control of traffic priorities or parameters consistent with terms of service?"
If you end up breaking functionality for people using Lotus Notes, or, for that matter, seeding things via BitTorrent, you've interfered with customer control over the network-using applications which they have chosen to run.
"Is the practice efficient with respect to both the upstream and downstream data paths?"
Indeed. BitTorrent is remarkably effective at utilizing the available bandwidth and copes much better than systems like HTTP or FTP where the server side often constitutes a central point of failure.
"Does the practice accomplish its purpose with minimal disruption to the network experience of customers as a whole?"
See above with regard to breaking users of Lotus Notes. Forging reset packets obviously constitutes a complete disruption to that particular network connection-- that hardly qualifies as "minimal disruption".
I welcome unbiased expert opinions, but being paid to hold an opinion does not qualify, Mr. Bennett.
There is a long-standing principle documented in RFC-793 that TCP traffic is an end-to-end connection; having a third party forge reset packets (aka "Reset Spoofing") in order to disrupt network traffic is widely and correctly regarded as a malicious form of denial-of-service attack.
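To illustrate why forged resets work so well: per RFC 793, a receiver accepts a RST whose sequence number lands in its current receive window. A simplified sketch of that acceptance test (Python, names hypothetical; real stacks also match the full 4-tuple and handle 32-bit sequence wraparound, which I'm ignoring here):

```python
def rst_accepted(rst_seq, rcv_nxt, rcv_wnd):
    """RFC 793-style acceptance check for a RST segment: the segment's
    sequence number must fall within the receiver's current window
    [rcv_nxt, rcv_nxt + rcv_wnd). Wraparound deliberately omitted."""
    return rcv_nxt <= rst_seq < rcv_nxt + rcv_wnd
```

An ISP's middlebox sits on-path and sees the live sequence numbers, so its forged RST always passes this check and the connection dies instantly -- exactly the property that makes reset spoofing an effective denial-of-service.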
With regard to the claim that "BitTorrent strives for a symmetric interchange of data, offering as much (or slightly more) in the upload direction as in the download direction."-- the BitTorrent clients I am familiar with, such as Azureus and uTorrent, default to limiting the upstream bandwidth to about 10% of the user's total upstream bandwidth, in order to avoid significant congestion of other outbound network traffic. Of course, that can be adjusted to suit the user's preferences.
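The arithmetic behind that default is simple enough to show (a hypothetical helper, not any client's actual code):

```python
def default_upload_cap(upstream_kbit, fraction=0.10):
    """Illustrates the ~10% rule of thumb described above: cap the
    BitTorrent upload rate at a fraction of the line's upstream
    capacity, leaving headroom for ACKs and other outbound traffic.
    Returns (cap in kbit/s, cap in bytes/s)."""
    cap_kbit = upstream_kbit * fraction
    return cap_kbit, cap_kbit * 1000 / 8
```

So on a typical 768 kbit/s ADSL upstream, the default cap works out to roughly 76.8 kbit/s, or about 9600 bytes/s -- a long way from the "symmetric interchange" claimed.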
With regard to the notion that the only choices for managing network bandwidth are random packet drop or reset spoofing, Mr. Bennett seems to be unaware of techniques used in many firewalls and freely available in the Linux and BSD operating systems as part of IPFW or PF+ALTQ, which include Weighted Fair Queuing (WFQ), WF2Q+ (http://redriver.cmcl.cs.cmu.edu/~hzhang-ftp/TON-97-Oct.pdf), and other variants of hierarchical packet scheduling found in ALTQ.
Lots of people use these today to prioritize VoIP, ICMP, and DNS traffic over FTP or peer-to-peer traffic. These QoS mechanisms scale up to at least T3/OC3 bandwidth on consumer-grade (CPE) Cisco routers or P2-grade Intel boxes, and they do not involve forging traffic -- nor do they even slow down lower-priority traffic when the network is not being used for higher-priority traffic.
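That last property -- bulk traffic runs at full speed whenever nothing more important is waiting -- is just work-conserving priority queuing. A stripped-down sketch (Python, names hypothetical; ALTQ's class-based schedulers are far richer than this):

```python
from collections import deque

class PriorityScheduler:
    """Work-conserving strict-priority queuing: interactive traffic
    (DNS, VoIP, ICMP) jumps the line, but bulk traffic (FTP, P2P)
    still gets the full link whenever the interactive queue is empty.
    Nothing is dropped and nothing is forged."""

    def __init__(self):
        self.interactive = deque()
        self.bulk = deque()

    def enqueue(self, packet, interactive=False):
        (self.interactive if interactive else self.bulk).append(packet)

    def dequeue(self):
        if self.interactive:
            return self.interactive.popleft()
        if self.bulk:
            return self.bulk.popleft()  # no priority traffic waiting: bulk runs unthrottled
        return None
```

Run it forward: a DNS query enqueued behind two P2P packets is still transmitted first, yet the P2P packets go out back-to-back the moment the interactive queue drains.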
Jeremy, yes, Russ is the chairman responsible for license approval. As to how you get elected/appointed to the OSI board: the bylaws, minutes, and so forth are all posted on the OSI website; for example, see Article V here:
As for our esteemed El Reg journalist, well, Ashlee, to the extent that you write about legitimate problems or issues, then whatever take you might have, from helpful suggestions for improvement to blunt criticism or even panning the whole notion of the OSI, is fair game.
However, if you choose to write silly nonsense such as comments like "...tapped to lead during a bizarre hazing ritual performed at midnight in the San Diego Zoo's penguin display", well, at best you manage to weaken the legitimate points you have raised. At worst, you might find that people actually do criticize journalists for not being able to make their points without making stuff up to use for a strawman argument....
Too much sugar in El Reg's Koolaid...
I think Ashlee added a bit too much sugar to the Koolaid she must have drunk before writing this. The most prominent backer of the GPLv3 pretty obviously is the Free Software Foundation, not SugarCRM.
Let's put it this way: I've never used SugarCRM, but I use gnutar every day, which just converted to GPLv3 with v1.18, and I expect that GCC will move to v3 as well with their next release after the current 4.2.1 version, and much of the rest of the GNU toolchain and utilities will follow suit.
Please note that I don't have anything against SugarCRM, nor do I hold a strong opinion about SugarCRM going with the GPLv3, rather than the GPLv2 or some other license, if they please-- certainly either version of the GPL is better for their users and for developers who might want to use some of their source code than a badgeware license would be.
Moving on, what matters most is whether people who use software end up with something which suits their needs, or can be changed or modified if need be. The Open Source Definition and the OSI approval process are one means to promote better software which comes with the source code and a license which permits changes to be made, shared, and redistributed to others. The FSF and their four freedoms are another good approach, and one which has resulted in a lot of good software, but some people don't care for the way the FSF devalues non-GPLed open source software under permissive licenses, such as BSD or MIT-licensed projects, because those can be used to create closed/proprietary software.
The main difference between the SugarCRM license and SocialText's CPAL is that the latter made some simple but vital changes as a result of the OSI review process. Specifically, they amended clause 14(a), on attribution, to state "...a prominent display of the Original Developer's Attribution Notice (as defined below) must occur on the graphic user interface (which may include display on a splash screen), if any. If the Executable and Source Code does not launch or run a graphic user interface, this obligation shall not apply."
If SugarCRM had been willing to make the same change, they would likely have been approved as well. Without that flexibility, a developer cannot reuse SugarCRM's code to write anything which runs without a GUI capable of displaying their badge. Now, the criticism of the OSI's disorganization with regard to license submission has some merit, but IMHO John Roberts's approach to getting OSI approval for his license left a lot to be desired, too.