Just asking, but ...
Did anyone get a stern telling-off for illegally punishing Bradley Manning?
Years ago, Apple sued Microsoft for copying Apple's look and feel. Microsoft came back with some case law showing that look and feel were not protected. The case in question was when Xerox PARC sued Apple for copying exactly the same look and feel. Decades later, Apple copy Etch-a-Sketch and sue Samsung for copying Etch-a-Sketch.
I thought Judge Birss had quashed this already. When Apple sued Samsung in the UK, it made the UK court the European Community court for this case. The idea was to prevent the same case being repeated in every member state. Apple got a spanking for raising the same issues in Germany and were required to run a series of adverts quoting the ruling that Samsung did not copy Apple.
Installed base at end of Q1 2013:
1 Android _______ 823M _ 57%
2 iOS __________ 277M _ 19%
3 Symbian ______ 165M _ 11%
4 Blackberry _____ 97M __ 8%
5 Windows Phone _ 28M __ 2%
6 bada __________ 27M __ 2%
Oops - they missed. Windows is not the 'third ecosystem'. It is 5th. Elop set fire to the Symbian platform so Windows Phone might gain one position. Blackberry could mess up their come-back and drop out of the running. Windows might get into fourth place for a short time, but then will be overtaken by Tizen and at least one out of Sailfish, Ubuntu and Firefox.
Have you got any evidence for 'Nokia's resurgence'? I know they sold the most expensive smart phone in the world - their head office. They cannot bump their smart phone revenue the same way again.
I agree that focussing on the business market is a one-way trip to legacy land. The thing is, they do not have many choices. Microsoft depend on manufacturers installing Windows on everything, but they are burning their relationships with manufacturers. Patent extortion, copying manufacturers' unique selling points into Surface and crowing about 100M licenses shipped (to manufacturers who cannot sell them on) are not the best ways to keep essential business partners happy. In a year or two, Windows on a new computer will be an expensive add-on feature - not bundled whether you want it or not.
I have known for years that Microsoft Linux was inevitable. I did not expect to see it so soon:
The next big milestone to look out for is Microsoft Office for Android.
You may not see anything wrong with Surface, but millions of other people do. We know this because they bought ABM.
I don't have an iPad, but if one fell out of a packet of corn flakes then installing Linux would make it a useful device for me.
The bundled MS Office license is the killer app for RT. Without that license, the cost of an RT tablet could get low enough that I would bother to do a web search for installing Linux on it.
The law used to be that if you got caught using unlicensed software then you had to buy a license. The problem was that gave no incentive to buy because you could either pay up front, or not pay and risk having to pay the same amount if you got caught. Plan B was to prevent you from using the software ever again:
Plan C now includes a hefty fine. The fun bit is part of the license says they can raid your business to do a software audit. If you do not have any software protected by FAST, they cannot just raid you at your expense. They have to buy some sort of warrant from a judge, and do the raid at their expense. Make certain there is no trace of any commercial software before you try that.
I still have the licence certificate for a copy of MS Office 95. I cannot install it because the receipt has faded completely (I am not sure if it runs under WINE either). FAST will not accept a licence certificate unless you have a receipt to prove you bought it yourself. Back in the day there was an excellent cartoon:
Policeman: 'What do you mean you have been burgled? All the computers are still here.'
Manager: 'They stole the licenses.'
If I download one copy of MS Office, I am supposed to get £28 from somewhere. If we follow the 'logic', by downloading 100 copies, £2800 magically appears in my wallet. Perhaps they mean that I can sell pirate copies of MS Office for £35, and the cost of the sale is £7. This all seems a bit pointless as I could charge £35 for installing Libre Office without breaking the law and my customers would be better off. I think the real flaw is assuming there is anything sensible going on here. Surely if a customer thinks he needs MS Office, a competent system builder should be able to sell him Libre Office Professional for £350, or the enterprise version for £3500.
A speed index would not push the market. At home, the family have three tasks that are too demanding for a Raspberry Pi. Those three tasks* are handled by an ancient x86 with an SSD, but could be handled by a more modern ARM. Although there are a few people who require an 8 core CPU for work, for the vast majority, an old PC is fine, and a cheap SSD upgrade will be far more effective than a CPU+Motherboard+memory upgrade with or without a noisy graphics card. The message most people are looking for is 'cheap and silent' not 'even faster than your existing PC that is already more than fast enough'. There is already an excellent 'power index': fanless. Intel and AMD have recently worked that out. They are trying to join the market, but on their terms.
The mobile market is competitive: you can buy only the features you want. The laptop market is segmented: if you want any one good feature you must pay for an ultrabook with many expensive features. I think the existing players will cling to segmentation for the rest of the year. The feature I have always wanted is a rugged modular laptop I could upgrade like a desktop. By the end of the year new phones will be USB3. Bolt a screen, battery and charger into a briefcase along with a phone and I get my rugged laptop. If I want a faster CPU, I will not have to replace the expensive components - the LCD, the throw-away Windows licence and the battery.
The gnu from the FSF logo has an excellent beard, but beards are not required to use open source software. Tux does just fine without one. Wilbur holds a paint brush in his mouth, so is better off without a beard. I thought Linus proved this beard requirement stuff was a myth in 2009:
I am sure HM Revenue and Customs have skilled investigators who understand the tax system and how to get evidence out of a multinational corporation. MPs publicly grilling a Google representative is just pointless theatre.
As it is, the government just promised to throw at least £290million down the drain on another broken NHS IT system. Imagine a compromise - Google continue to pay almost no tax, but provide a free computerised prescription system with full analytics and targeted ads. The sad thing is I have confidence in our government's ability to find an alternative that is worse than either plan.
142 farads per gram? Excellent - combine that with the 12 Wh per kilo and we get a maximum voltage of ... 0.78V.
The other nasties are lifetime and working temperature. Working temperature is usually specified for a lifetime of 1000 hours (42 days). Lifetime doubles if you halve the working voltage (dividing the energy stored by 4), or over-specify the temperature by 10 Centigrade. We need temperature and lifetime specs to compare these to existing devices.
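A quick sanity check of that 0.78V figure, sketched in Python (assuming the quoted capacitance and specific energy apply to the same mass of material):

```python
# E = 1/2 * C * V^2, so V = sqrt(2E/C). Convert both quoted figures to per-kg.
from math import sqrt

specific_energy = 12 * 3600   # 12 Wh/kg expressed in J/kg
capacitance = 142 * 1000      # 142 F/g expressed in F/kg

v_max = sqrt(2 * specific_energy / capacitance)
print(round(v_max, 2))        # about 0.78 V
```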
... counting distro downloads would not be my tenth choice.
Here are 462,000,000 Android sales in 2012 with a single link:
Windows (all flavours) only scored 296,000,000 that year. Android is not the only flavour of Linux. If we start adding more Linux flavours, that will include markets like supercomputers and routers where Windows is almost absent. Penguins do not need to bulk up installation numbers. The tables have turned:
Linux Windows 8 not yet ready for the desktop.
An interesting report would contain:
Percentage of downloads promptly deleted because the downloader never wants to hear that again.
Percentage of downloads that resulted in a sale.
Percentage of downloads used for format shifting previously bought material.
Percentage of downloads that are of material that is not for sale.
Percentage of downloads distributed for free by the author (as in advertising for the T-shirts).
Percentage of downloads that are from someone who listens to the music, could pay for it but doesn't want to.
I like A.N Other's proposal for publishing the percentage that goes to the musician/artist/author. After all, where does the money come from to buy these laws, studies and articles in The Register?
(Personal choice: I buy DVD's when the price becomes reasonable.)
If he goes for 'Microsoft will save us', he will find that the cost of a TIFKAM license rises until he gets only 0.01% profit. On the other hand, he can select components with good open source drivers and ship with FreeDOS to avoid the patent trolls. Retailers would love it because they could charge for installing Android. The Penguins would love it because they could install their choice of OS. Microsoft would go berserk because TIFKAM would have to compete and actually do what users want for a fair price.
Not that long ago, companies shipping from the Channel Islands could avoid some VAT and pass on some of the saving to customers. Companies that were not set up to ship from the Channel Islands whined that this was unfair, and got the law changed. Before the law changed, many UK customers were avoiding tax by buying from play.com and other tax-efficient mail order retailers.
I am a tax avoider. So is almost everyone else. You are welcome to call the pot black, but have you looked carefully in the mirror? Unless you made a real effort to select opportunities to pay tax, chances are that you are a kettle.
Low sales of Windows 8 because of the recession:
Low sales of Windows 8 because of manufacturers:
Low sales of Windows 8 because of tablets:
Low sales of Windows 8 because of smart phones:
Low sales of Windows 8 because people do not like the user interface:
There will be a new excuse next month, and the month after.
About 2000 years ago, Aulus Cornelius Celsus gathered together literature about medicine from the authorities in the field. Some of the treatments in his book 'De Medicina' are beneficial, some are survivable, and others are just plain silly. At the time, there was no commonly accepted way to tell the difference. There was no need to because 'the authorities in the subject knew better' and 'some knowledge is lost as it is passed down through the generations, so your ancestors must have known far more than you ever could'. Hippocrates and Galen relied on the authorities and came up with balancing the four humours and blood letting.
About 500 years ago, Philippus Aureolus Theophrastus Bombastus von Hohenheim (aka Paracelsus) had the radical idea of learning by observing nature rather than blindly relying on texts written by the authorities. His medical theories were bat-shit crazy, but the idea that modern generations could know more than the ancients was a huge step in the right direction. Paracelsus's medicine involved prescribing herbs and minerals and looking to see if this did any good. (Double blind trials and proper statistical tests came much later.)
Please read up on the scientific method (http://en.wikipedia.org/wiki/Scientific_method), especially the bits about replication, external review, data recording and sharing. Without those things, you end up with balancing the humours, huge wind farm subsidies, carbon emissions trading and blood letting. If you still insist on trusting authorities, I recommend a tried and tested cure: leeches applied 50 at a time.
Is Eadon a pre-teen penguin, a Microsoft shill trying to give penguins a bad name, or just a troll? When I see his name, I just skip past the comment and any replies. If Eadon gets banned we will just see equally annoying posts from Eadori. Perhaps one day he will grow up, but in the meantime, there is no need to feed him any attention.
I think the most common reason for reading the report will be companies trying to find the most effective tax avoider so they can copy a known good system. Most people won't bother to read a government report. Few would believe it was accurate anyway. Let's see if anyone claims they will make purchase decisions based on the government-supplied tax avoidance data.
Let's start with some data stored on a disk, and increase the size until problems get in the way. Step one: buy a bigger disk. When that doesn't work, split the data between two disks. Pretend you are locked into software that won't work like that. Plan B is to make a big virtual disk out of many small disks. Plan B is a disaster waiting to happen: when any disk fails, you get file system corruption and lose all your data.
The old solution was RAID 5: Add one more disk than you need. If you now have N disks, reserve 1/Nth of each disk for parity data. For each sector of parity data, assign one sector of real data from each of the other drives. Start with each parity sector set to the exclusive or of all the corresponding data sectors. Before every write, read the sector that is about to be overwritten, write the new data. XOR the old data, the new data and the data from the corresponding parity sector and store the result back on the parity sector. This preserves the fact that each parity sector is the exclusive or of the corresponding data sectors. If one drive breaks, the array still works. When you need to read a sector on the broken drive, you read the sector from each drive that shares the same parity sector as the missing sector. The exclusive or of all those sectors is the data you wanted. Pull out the broken drive, plug in a new one and you can use that trick to restore all the data you cannot read directly from the broken drive.
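The XOR bookkeeping above can be sketched in a few lines of Python (a toy with one byte per 'sector' and four data drives plus one parity drive - not real RAID code):

```python
from functools import reduce

data = [0x11, 0x22, 0x33, 0x44]               # one sector from each data drive
parity = reduce(lambda a, b: a ^ b, data)      # parity sector = XOR of all data

# Update in place: XOR old data, new data and old parity to get new parity.
old, new = data[2], 0x55
parity ^= old ^ new
data[2] = new

# Drive 2 dies: rebuild its sector from the survivors plus parity.
survivors = [s for i, s in enumerate(data) if i != 2]
recovered = reduce(lambda a, b: a ^ b, survivors) ^ parity
assert recovered == data[2]
```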
RAID 5 has issues. That read before every write costs performance. When a drive fails, reading all the other drives to get the missing data hits performance. When you replace the broken drive, the array thrashes hard and you lose performance. The subtle disaster is if a read fails, and the error detection algorithm doesn't notice (if you have a large busy array, this will bite you). The misread sector will cause the parity sector to contain garbage. When the next disk fails (when - not if - as you have so many disks) the garbage in the parity sector will cause data corruption in the corresponding sector on the broken disk. RAID 5 is almost always the wrong answer. Instead, double the number of disk drives and store each sector on two drives. Disks are cheap, and mirroring does not impose the large performance penalties that come with RAID 5.
Now let's move into the IBM universe. In this universe, error detection is perfect, so the file system corruption risk of RAID 5 disappears. The poor write performance of RAID 5 does not matter to you, and you do not mind your system slowing to a crawl each time a disk fails. You do care about the price of disks, so you are too cheap to go for mirroring. Lastly, you buy your drives from whichever manufacturer is currently going through a bad patch (this happens to all of them - just watch the commentards slagging off the particular manufacturer that happened to cause them grief). You decide that a second drive can fail before a first failed drive can be restored, but that you will never have three broken drives at the same time because you have sacrificed a chicken to the voodoo gods.
Now comes the difficult bit. In the IBM universe, how much space do you need to reserve so that you can recover all the data if two drives are broken? This is quite a difficult mathematical problem. The lazy answer is to choose a simple algorithm that uses more than the minimum required space. A less lazy answer is to get a computer to search for a better solution than the simple one you came up with. Mathematicians at IBM have solved the problem, and now the minimum required space (and the algorithm that uses it) can be calculated.
If you have been programming a while, you might well have bumped into single bit error correcting codes (if that day has yet to arrive, just remember to ask Wikipedia about Reed-Solomon error correction). If you cannot make data packets small enough to get only single bit errors, you will need to look at correcting multiple bits at once. Mathematicians have done all the grunt work for you already, but they express it in their own jargon, which is clear as mud to most programmers.
Let's start with binary and exclusive or. Programmers should have met modulo arithmetic: byte 130 plus byte 136 is byte 10 because there was an overflow. In computing, integer arithmetic is modulo 256 or 65536 or the bigger numbers for 32 and 64 bits. Now take that down to modulo 2 - the arithmetic for bits:
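A minimal Python check of the claim - nothing here is from the paper, it is just the truth table:

```python
# Addition modulo 2 and exclusive or give the same truth table, which is
# why mathematicians write '+' where programmers write '^'.
for a in (0, 1):
    for b in (0, 1):
        assert (a + b) % 2 == a ^ b

# It extends bitwise: XOR of two bytes is eight independent mod-2 additions.
assert 0b1010 ^ 0b0110 == 0b1100
print("modulo-2 addition is XOR")
```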
Mathematicians' addition modulo two is the same as the exclusive or instruction from computing. Mathematicians know all about addition, so they use it whenever exclusive or is the obvious solution to a programmer. If you look inside the paper, you will notice GF(2^b). That 2 is for arithmetic modulo two (binary! :-). It has been ages since I had to deal with this stuff, but if I recall correctly, that b is the number of bits in a sector - or whatever data packet you need an error correction code for (mathematicians imagine a vector of bits). GF is short for Galois field (he was French; pronounce it 'Galwa' or mathematicians will know you are a programmer). In mathematics, a field is a set with an 'add' and a 'multiply' operation, subject to a bunch of additional restrictions on how you define them. In this case, the set is all possible contents of a disk sector. Add means make a new sector by exclusive oring the corresponding bits of two other sectors. Multiply is the subtle one: bitwise and will not do (it has no inverses, so you would not get a field) - multiplication in GF(2^b) treats the bits as coefficients of a polynomial, multiplies without carries, then reduces modulo a carefully chosen 'irreducible' polynomial. The fun bit (for mathematicians) is that these operations on sectors have the required properties for a Galois field. They can use all the things they have discovered about Galois fields (far more than most programmers can stand) to find the equations for recovering data from a RAID array with two broken drives.
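For the curious, here is roughly what that field multiplication looks like in code. The reducing polynomial 0x11b below is the one AES happens to use for GF(2^8) - an illustrative choice of mine, nothing to do with IBM's paper:

```python
# Carry-less 'Russian peasant' multiplication in GF(2^8): XOR instead of
# addition, and reduce modulo an irreducible polynomial instead of carrying.
def gf256_mul(a, b, poly=0x11b):
    result = 0
    while b:
        if b & 1:
            result ^= a          # 'add' a shifted copy: XOR, never carry
        a <<= 1                  # multiply a by x
        if a & 0x100:
            a ^= poly            # reduce modulo the irreducible polynomial
        b >>= 1
    return result

assert gf256_mul(2, 3) == 6
assert gf256_mul(0x53, 0xCA) == 1   # a known inverse pair in the AES field
```

Because every non-zero element has an inverse like that 0x53/0xCA pair, you can solve simultaneous equations over sectors - which is exactly what two-drive recovery needs.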
We have almost reached the point where I am going to run away screaming, but there is one more piece of mathematics that I have actually programmed, so with any luck, I can explain it and perhaps three or four of you will actually need to use it at some point in the next 50 years. Pretend we have N bits and a gremlin can flip up to m of them. If m=0, we have 2^N useful messages. If m=1, for each useful message, we have to use (1+N) code points: one for the message received without errors, and N more for all the single bit errors. That gives us at most (2^N)/(1+N) useful messages. If m=2, we only get (2^N)/(1+N+N*(N-1)/2) useful messages. When we receive one of the 2^N code points, we have to work out which useful message was actually sent. Clearly we want the 'nearest' one. For there to be a 'nearest' one, we need a list of the useful code points and a function that converts two code points into a non-negative integer that we can call the distance between those code points. One handy definition of distance is the number of bits you have to flip to get from one code point to the other.
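That counting argument is the Hamming bound; a short Python sketch (the function name is mine):

```python
# Each useful message must 'own' every code point within m bit flips of it,
# so the size of that ball limits how many messages fit in 2^N code points.
from math import comb

def max_messages(N, m):
    ball = sum(comb(N, i) for i in range(m + 1))  # points within m flips
    return 2**N // ball

assert max_messages(7, 0) == 128      # no gremlin: every code point is usable
assert max_messages(7, 1) == 16       # 128/(1+7) - achieved by Hamming(7,4)
assert max_messages(7, 2) == 4        # 128/(1+7+21)
```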
If the minimum distance between any two useful code points is two, then we can detect (but not correct) some errors, because there will be some code points that are the same distance (1) from two different useful code points. If the minimum distance is 3, we can correct single bit errors (if a code point is 1 unit away from a useful code point, it must be at least two units from any other useful code point). To correct two bit flips, we need a minimum distance between valid code points of 5. For small values of N, you could find a good set of useful code points and a distance function by trial and error. When N gets large, you really want some mathematics that finds those things for you.
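A toy decoder using that bit-flip distance. The three-bit repetition code here is my own minimal example (minimum distance 3, so single flips are corrected), not anything from the paper:

```python
def distance(a, b):
    return bin(a ^ b).count("1")   # number of bits you must flip

useful = {0b000: 0, 0b111: 1}      # each data bit sent three times

def decode(received):
    # Pick the useful code point nearest to what arrived.
    return useful[min(useful, key=lambda c: distance(c, received))]

assert decode(0b000) == 0
assert decode(0b010) == 0          # one flip: still nearest to 000
assert decode(0b110) == 1          # one flip from 111
```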
All this is really useful when you do not know which bits the gremlin flipped. In the IBM universe, disk drives can always spot when reading a sector failed. Instead of some unknown bits getting flipped, some known bits are set to 0. This requires slightly different mathematics to the usual error correcting codes programmers meet. If you get as far as page 2, the paper defines PMDS (Partial Maximum Distance Separable) and SD codes, which are just acronyms to frighten journalists on page 1. I skimmed through to the end, and did not see any mention of performance, thrashing when a drive fails or error propagation when a bad read has the right checksum by chance. I expect this set-up has worse performance and corruption problems than RAID 5. For most people, the extra disks required for RAID 10 are going to be cheaper than implementing IBM's pretty mathematics.
The key difference is not on the diagram. When a process on a CPU tries to access some memory, the address that the process selects is a virtual address (back then: a 32-bit number, now often a 64-bit number). The CPU tries to convert the virtual address into a physical address (a different number, sometimes a different size). There are several uses for this rather expensive conversion:
Each process gets its own mapping from virtual to physical addresses - this makes it very difficult for one process to scribble all over the memory that belongs to a different process.
The total amount of virtual memory can exceed the amount of physical memory. (Some virtual addresses get marked as a problem. When a process tries to access such a virtual address, the CPU signals this as a problem to the operating system. The operating system suspends the process, assigns a physical address for the virtual address, gets the required data from disk into that physical memory then restarts the process.)
Sometimes it is just convenient - the mmap function makes a file on a disk look like some memory. If a process tries to read some of the mapped memory, the operating system ensures data from the file is there before the read instruction completes. If a process modifies the contents of mapped memory, the operating system ensures the changes occur to the file on the disk.
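Python's mmap module shows the effect in a few lines - a self-contained sketch using a temporary file:

```python
# Changes made through the mapping land in the file, and reads through the
# mapping come from the file: memory and file are two views of the same data.
import mmap, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"hello world")

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)   # map the whole file
    assert mm[0:5] == b"hello"      # read through the mapping
    mm[0:5] = b"HELLO"              # write through the mapping
    mm.flush()
    mm.close()

with open(path, "rb") as f:
    assert f.read() == b"HELLO world"
```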
In UMA, the CPU and the GPU access the same physical memory, but the GPU only understands physical addresses. When a process wants some work done by the GPU, it must ask the operating system to convert all the virtual addresses to physical addresses. This can go badly wrong because a neat block of virtual addresses could get mapped to a bunch of physical addresses scattered all over the memory map. Worse still, some of the virtual memory could map to files on a disk and not have a physical address at all. The two solutions are to have the operating system copy the scattered data into a neat block of contiguous physical addresses or for the process on the CPU to anticipate the problem and request that some virtual addresses map to a neat contiguous block of physical addresses before creating the data to go there.
Plan B looks really good until you spot that the operating system might not have such a large block of physical memory unassigned. It would have to create one by suspending the processes that use a block of memory, copying the contents elsewhere, updating the virtual to physical maps and then resuming the suspended processes. It gets worse. That huge block of memory cannot be paged out if it is not being used, and the required contents might already be somewhere else in memory, so it will have to be copied into place instead of being mapped.
All this hassle could be avoided if the GPU understood virtual addresses. That would cut down on the expensive copying (memory bandwidth limits the speed of many graphics intensive tasks). The down side is it adds to the burden of the address translation hardware, which already does a huge and complicated task so fast that many programmers do not even know it is there.
Microsoft may have bloated the netbook to its death throes, but the small cheap computer is back:
Intel have worked out that they need to compete with ARM, and they have to drop their prices to do it.
If I really need real time, I use a cheap dedicated SoC. Debian's official repository has 29000 software packages. I admit that is not quite enough for me, so I use a few unofficial repositories too.
PS: Eadon - my Pis are doing useful work, but their hardware would not be my first choice for a smart phone. Top of the list of problems would be VideoCore. I have more confidence in the availability of Lima drivers for Mali on future ARM cores than for anything open related to the VideoCore DSP - if you can buy a modern CPU with one.
They need something to compete against the ARMs:
https://www.miniand.com/products/Hackberry A10 Developer Board
Battery memory was a popular diagnosis for any reduction in charge retention no matter what the real cause. It is much easier to blame battery memory than to actually find out what the problem is. Here is a simple test for memory effects: Is the battery in orbit and charged by solar power? Without that precision repetition of charge cycles, you should be looking elsewhere for the cause of reduced battery performance.
That is a summary of liquid propellant research (the good, the bad and the insane). They tried everything at least twice (US Navy or Army funding). I cannot imagine work like that being repeated today for at least three reasons. Governments are out of money. When they do spend money, they are useless at asking for what they need and even if they get that right, they ask companies that are experts at taking the money, delivering nothing and getting another contract to do the same thing again.
After reading about it, I can see why '50s tech is so enduring. (SpaceX Falcon 9 is kerosene/LOX.)
Home users were supposed to get cheap Windows ME and anyone who needed to do anything was supposed to spend hundreds on Windows 2000. ME was a disaster in its own right, but the nail in its coffin was Linux with free stable multi-tasking. XP had its price slashed for home users because Microsoft needed a competitor for Linux. Next came a whole stream of 'Linux not ready for the desktop' articles, followed by 'get the facts' gone wrong, and still they could not increase the price of XP.
Eventually Microsoft got an operating system that they could charge for. Just when XP was about to be killed to force expensive Vista upgrades, out comes the small cheap computer running Linux. XP users got a stay of execution while 'small cheap computer' got transmogrified into 'netbook' and obscurity.
UEFI and secure boot are here to block Linux, so XP is no longer required. Windows users can now enjoy the yearly price hike and biennial hardware refresh that Microsoft planned for them 13 years ago. Enjoy porting to TIFKAM and get ready to port to Microsoft's new fashion statement every two years, because Microsoft think you cannot join the penguins any more.
Fan: it may be quiet now, but a year from now, the noise will be annoying. It sounds like it is not a standard size, so replacing it will be a pain.
I have 5.6TB attached to a Pi via USB2. Fine for video playback, but for anything else, you really notice it is not SATA. Copying from one disk to the other rubs it in, and the network connection sharing USB bandwidth really hurts. At ten times the price, Intel's NUC missed the opportunity to do better. After a minute of searching I found a $220 Thunderbolt hub to get 1Gb Ethernet and some USB3 ports. That hub has DVI/HDMI, audio in/out, and must have a decent CPU inside - without a fan. What a pity that it needs a computer to use it.
The most outstanding feature of this product is that it comes with no OS installed. People can decide for themselves if they want to pay Microsoft tax. Ubuntu installed without hassle, but Windows didn't. Does Microsoft's current 25% market share mean they still deserve to be called mainstream? I think 'legacy' is a more appropriate adjective.
Nokia did not say which patents were allegedly infringed. This usually means that if there are any infringed patents at all, then they are invalid. VP8 was designed from the ground up to avoid patent infringement. The usual way to do that is to base the design on expired patents. The biggest technical complaint against VP8 is that it is old tech. If Nokia say VP8 is "no better than the existing H.264" then you can be sure that they wanted to say it is worse, but had no evidence. Releasing a statement through FOSS Patents is also very suspicious. It is as if they wanted to tell some really whopping lies, but were concerned about possible legal backlash. Florian was the only guy mercenary enough to repeat what he was told.
Nokia is in its final death throes, looking for a way to bump up its sale price. They either hope that Google will buy them out or that a troll will buy them before it becomes clear that the 'VP8' patents are bogus. There is no way that Nokia itself could win a patent fight. It is hemorrhaging cash way too fast to last long enough.
I eventually discovered the problem was lack of a particular flavour of memory. Adding more memory solved nothing as it was the wrong flavour. The 'solution' was to reboot every hour or two - to be certain that the 'save' option was still available. I recommend AbiWord, KWord, Libre Office or one of the many other fine choices available to penguins.
PS: Mrs D should think twice before attending PyCon.
If you are going to quote de Icaza as an open source guy, you might as well quote Florian Müller for Google and the pope for atheism.
Microsoft hired Novell to port .NET to Linux. Novell put de Icaza in charge of the project (Mono) and he has been trash talking Linux ever since. As Microsoft has been working hard on poisoning all their business relationships recently, it is hardly surprising that he has turned to Apple.
As we are here, let's look at de Icaza's complaints:
I have never used OS X, so I cannot comment on how well resume works on a Mac, but booting up a Linux laptop only takes a few seconds. I have never bothered to try suspend and resume on Linux.
Wifi did have me stumped for a while - my first wifi card was broken. After replacing that, wifi has worked solidly. Early on, lack of drivers restricted the choices for hardware. These days, you have lots of choice, but checking out http://linuxwireless.org/ before a purchase will let you select a card with all the wifi modes supported out of the box.
I got burned by video drivers once in 2002. Since then I have taken care to read up on graphics chip support before making a purchase decision. Support for the newest hardware is usually poor or absent. The exception is Intel, who have done an excellent job of providing quality drivers for their graphics chips.
I have not had to recompile a kernel to adjust this or that ever. I have not had to compile a kernel (or even a module) for years. If you really need to squeeze an extra percent or two of performance out of a box, there are plenty of kernel parameters to twiddle in /proc without having to compile anything.
I have never had to chase the proper version of a package for the current version of Linux. I can understand that this is more of a problem for someone working on the next release of SUSE Linux. The idea that a Linux developer would ever have to "beg someone to package something" is ludicrous. If a specific version of an obscure tool is not packaged up and ready for my distribution, I download the source code and use the distribution's packaging tools myself. I find it hard to believe that the lead developer for Mono cannot do this - after all, someone has to create Mono packages for SUSE. Why on Earth is de Icaza begging people to do his homework for him?
I can completely understand that while he worked at SUSE, he chastised people for not using Mono. I have 3275 packages installed on this laptop and none of them depend on Mono. It is not a popular technology with penguinistas.
My package manager shows 41125 packages available. The other main distributions can claim a similar number, and to a large extent, they are the same programs. How can de Icaza claim that there are incompatibility problems?
Finally fragmentation: When KDE went in a direction I did not like, I switched to Trinity. When Gnome did something different, some users went to MATE. Lots of choices, no need to beg anyone to package anything. When Windows 8 came out, lots of people screamed and whined. They have never had anything like the choices available to Linux users. I hear very few complaints about Apple's user interfaces, but there are some like the frustrating spelling corrector on iPhone. I am sure those Apple eaters would love a taste of fragmentation.
I am quite happy for people to poison themselves with alcohol. It would be nice if alcohol taxes paid for the collateral damage. Policy based evidence makes it hard to work out if that is happening. There are cost/benefit figures for alcohol all over the internet that support wildly different figures.
Total NHS costs: £128Billion (http://en.wikipedia.org/wiki/National_Health_Service#Funding)
It is not clear if that figure includes payments by insurance companies for road traffic accidents - some of which are alcohol related. Take your own guess at how the increase in premiums caused by alcohol-related traffic accidents is split between drinkers, drivers and drunk drivers.
Alcohol taxes £10billion: http://www.hmrc.gov.uk/statistics/receipts/receipts-stats.pdf
Now try to subtract the cost of collecting that tax, and find out if "customs duties" includes taxes on imported alcohol. Popular figures without citation on the internet are £15billion revenue and £2.5billion collection costs.
An early death from alcohol reduces the return on investment for state funded education, but also reduces NHS costs for care of the elderly. Different ways to account for that sort of thing can match the evidence to the desired policy. At some point, increasing the tax rate does not increase the tax revenue. Perhaps a few people will cut alcohol consumption. Some will cut costs elsewhere and the rest will brew their own. Distillation requires a license, but is not technically difficult - I remember doing it in school (1ml/year was legal for educational purposes).
get_state_sales_tax is not defined. It should probably be a switch into separate functions for each state. Remember to update each of those functions every year, and include the date of the sale to handle the introduction of new tax rates. item_type_cd is not a simple property of items for sale. It depends on the state and the date. Compound items may have separate tax rates for different components. v_shipping_value is not defined. At a guess, it is the shipping cost for the entire cart, so it is getting multiplied by the number of different items in the cart. You did not include the quantity of each type of item.
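For illustration, the shape such a function probably needs - every state, date and rate below is invented for the sketch, not real tax data:

```python
# A date-aware rate lookup: the rate depends on both state and sale date,
# so new rates can be added without breaking historical invoices.
from datetime import date

RATE_HISTORY = {
    "CA": [(date(2011, 7, 1), 0.0725), (date(2013, 1, 1), 0.075)],
    "OR": [(date(2000, 1, 1), 0.0)],   # no state sales tax
}

def get_state_sales_tax(state, sale_date):
    """Return the rate in force for this state on this date."""
    rate = 0.0
    for effective, r in RATE_HISTORY[state]:   # entries in date order
        if sale_date >= effective:
            rate = r               # keep the latest rate already in force
    return rate

assert get_state_sales_tax("CA", date(2012, 6, 1)) == 0.0725
assert get_state_sales_tax("CA", date(2013, 6, 1)) == 0.075
assert get_state_sales_tax("OR", date(2013, 6, 1)) == 0.0
```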
This only deals with state tax. The web site will show one price while the customer shops, then a completely different price once it knows what state he is in. No explanation for the price increase is given, so there will be expensive phone calls from angry customers.
Premature/defective optimisation: skipping a multiply by 0 with a conditional branch is often more costly than just multiplying by 0 and adding 0. As it introduces extra code paths and extra tests, I would leave out the conditional branches until processing speed becomes an issue, profiling shows this is the place to work on, and the change actually shows some benefit.