Re: Apple to fix new maps app
And Lo, on the eighth day, after his day of rest, lord Jobs invents cartography.
Ah, but did he invent the Dymaxion projection?
They don't have any cell phones any more?
Oh no! Where is mein Handy?
And we are RIGHT IN THE MIDDLE?
Unlikely. That's just an effect of isotropy. If everything is receding at an equal pace from everything else then every place seems to be the centre of the universe. No doubt many other planets have astronomers that are still struggling with heliocentricity, so I wouldn't feel too bad about the mistake you made :)
I can only imagine you've never tried to code against [X11]. Kafka couldn't have done better.
I don't know. The client-server paradigm they use is pretty cool (even if they decide to swap the names around). I think if you really want Kafkaesque then you have to be an iPhone user. Your arms and legs may no longer be in the place you expect them to be and you're experiencing difficulty coordinating your extremities to perform what should be a mundane task, but still all you have in mind is asking Siri whether you can make the next train in time for work.
Newsworthy because it's a load of kernel and other code ...
Yep. I haven't looked yet, but it seems that they'll have fixes for two problems I ran across...
1. No native kernel driver/firmware for some quite popular (read: cheap) wireless dongles.
2. Fix for excessive interrupt rate (dwc_otg.fiq_fix_enable=1 now the default) as mentioned in last paragraph.
I managed to find the fix for these myself thanks to the excellent forum and blog posts that people are making about the Pi, but it's certainly welcome to have these baked in for less technically skilled users. As for the overclocking, the rpi version of xbmc has been overclocking to 800MHz for quite a while (and there's been the option to do it in /boot/config.txt in regular Pi distros too). Nice to see that there's room to push the envelope even further and still be safe.
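For reference, this is roughly where those knobs live on the Pi (values are illustrative and from memory, so check the official docs/forums before copying anything):

```
# /boot/config.txt -- a modest overclock (the stock ARM clock is 700MHz)
arm_freq=800
core_freq=250
sdram_freq=400
over_voltage=0

# The USB interrupt fix is a kernel parameter, so it goes on the single line
# in /boot/cmdline.txt rather than in config.txt:
#   ... dwc_otg.fiq_fix_enable=1 ...
```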
Heh... if you were Huckleberry Finn you'd just nick the one off the neighbour's windowsill.
simultaneous events are ones which occur at the same time according to an observer
Hmm.. I was going to pounce on this and ask "yes, but relative to what observer?" My point being that simultaneity is a relative concept. Then I reread what you'd written and realised that you hadn't made the mistake I thought you had (when working in a relativistic framework).
Still, at least I get to post a link that explains it a little bit better
at 25:25 explains it all
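For anyone who'd rather have the algebra than the video, the textbook Lorentz transformation makes the point directly (frame S' moving at speed v along x relative to S):

```latex
% Lorentz transformation of the time coordinate:
t' = \gamma\left(t - \frac{v\,x}{c^{2}}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
% Two events with \Delta t = 0 but \Delta x \neq 0 in S are then separated by
\Delta t' = -\,\gamma\,\frac{v\,\Delta x}{c^{2}} \neq 0
% in S': simultaneous for one observer, not for the other.
```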
if man is still alive ...
Re: no SD on Nexus or Nexus 7
Yes, Google also fucks up
The way I see it, the lack of an upgrade slot was a deliberate design decision to achieve two things. First, since they're basically subsidising the hardware, they want to keep costs down. Second, they don't want to piss off the other Android suppliers by making a phone or tablet that's too good (again, particularly if they're subsidising the cost). I'm only surmising this, but I feel that they want to produce something that's a pretty good showcase for Android, but want to avoid being accused of "stealing" the market from other Android makers. So I see the lack of an SD slot here as being kind of a middle ground, with the assumption that if users want to upgrade, they'll check out the other android manufacturers.
A few years ago I was using a basic Nokia 6310 phone. I think that's the model number. Anyway, it had pretty good standby time for the most part--probably about 2.5 days. The big problem with it, though, was that when it went out of coverage it ramped up the power to the GSM radio, so if I forgot to turn it off or keep it on charge during the day (we had very bad coverage where I worked), I'd have a flat phone by the end of the day. That's the first reason people want a replaceable battery--as a backup in case they get caught with a flat battery and no easy way to recharge once they notice it.
The second reason is that batteries deteriorate over time. A three-year-old phone won't last as long on a charge as it did the day you bought it. I agree totally with the article here--it does seem like a very cynical ploy by Apple to keep you on the upgrade cycle to the next shiny, when really all that's wrong with the three-year-old phone is that it needs a new battery.
On a similar (ahem) note, I think someone should do a Lisa Simpson on it. Bring along their saxomophone, do all the rehearsals and then on the night do the signature solo and swiftly exit off stage.
Well, probably not.
I guess they might be able to find a job in a different profession that's better paid.
Not meant to be taken seriously, of course. I just enjoyed the film!
Looks like the "write once, read never" approach. Makes me wonder why I bothered.
And yes, I did use the "send corrections" link.
Probably not very good. Ray tracing tends to exercise the I/O an awful lot so even if you assign one pi to a particular section of the screen it'll still end up accessing other parts of the scene in a pretty random access pattern (rays bounce). With only a 100MBit connection (and the fact that the USB and Ethernet share a bus) it's easy to saturate the available data channels--a problem that only gets worse as you scale up (though working with different net topology and having more control nodes could definitely help, to a degree).
On the other hand, having the farm render a typical fractal image would be a perfect application for it since each screen section is typically independent of each other one.
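To make the contrast concrete, here's a rough Python sketch of how a farm node could render its own Mandelbrot strip with nothing to fetch from anyone else (the sizes and the carve-up are purely illustrative):

```python
def render_strip(strip_index, n_strips, width=640, height=480, max_iter=256):
    # Each strip depends only on its own pixel coordinates -- no shared scene
    # data -- so the 100Mbit link only carries the finished pixels back.
    x0, x1 = -2.0, 1.0          # real axis range
    y0, y1 = -1.2, 1.2          # imaginary axis range
    rows_per_strip = height // n_strips
    strip = []
    for py in range(strip_index * rows_per_strip, (strip_index + 1) * rows_per_strip):
        ci = y0 + (y1 - y0) * py / height
        row = []
        for px in range(width):
            cr = x0 + (x1 - x0) * px / width
            zr = zi = 0.0
            n = 0
            while zr * zr + zi * zi <= 4.0 and n < max_iter:
                zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
                n += 1
            row.append(n)           # iteration count -> colour later
        strip.append(row)
    return strip                    # ship back to the head node for stitching

# e.g. node 3 of a 16-node farm just runs render_strip(3, 16) and posts the result.
```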
Despite how impractical this thing is, I'd still love to have one. I'm sure it's also a great teaching resource in spite of (nay, even because of) its shortcomings, necessity being the mother of invention and all that.
"What is the point ... with Fanboi trolling?"
Maybe because it's easy to get a rise out of Apple fans with it? Low-hanging fruit, you might say.
Well think about ... with that level of bandwidth maybe some kind of immersive virtual reality setup would be possible. Maybe you wouldn't even need to know you're in Kansas?
I had very similar thoughts on reading this. At first I was thinking, what's wrong with adapting the Transport Stream protocol, but that's not exactly scalable to differing pipes or screen sizes. At best it'll let you tune transmission for a poor-quality connection. My next thought was to use something like the "progressive" modes in JPEG, PNG, or (IIRC) Dirac (or a similar trick as used in FLAC audio, separating the stream into a lossy part and a set of deltas). This would be much easier if the encoding system was based on wavelets (again, I think Dirac does this), but an FFT-based system can work too. The problem there, though, is how to do flow control so that the sender can stop sending the high-detail part of the stream. Then it struck me... why not use UDP for the fine level of detail and use TCP for the base image? You'd still have to do dynamic tuning on the encoder side (and a bit of buffering and stitching together at the decoder end), but at least the congestion part could be mostly handled by the network itself.
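Just to make that last idea concrete, here's a rough Python sketch. The hostnames, ports and the toy split_frame() encoder are all made up; a real layered codec (wavelet/progressive) would produce the base/detail split:

```python
import socket

RECEIVER_HOST = "receiver.example"
BASE_PORT, DETAIL_PORT = 9000, 9001

def split_frame(frame_bytes):
    # Toy stand-in for a layered encoder: first eighth of the bytes is the
    # "base layer", the rest is chopped into 1KB "detail" chunks.
    cut = max(1, len(frame_bytes) // 8)
    base, rest = frame_bytes[:cut], frame_bytes[cut:]
    return base, [rest[i:i + 1024] for i in range(0, len(rest), 1024)]

def send_frame(tcp_sock, udp_sock, frame_bytes, send_detail=True):
    base, detail = split_frame(frame_bytes)
    # Base layer over TCP: reliable and congestion-controlled, so the
    # receiver always has *something* to show.
    tcp_sock.sendall(len(base).to_bytes(4, "big") + base)
    # Detail over UDP: the encoder can simply stop sending it (or the network
    # can drop it) when bandwidth is tight, and the picture just gets blurrier
    # instead of stalling.
    if send_detail:
        for chunk in detail:
            udp_sock.sendto(chunk, (RECEIVER_HOST, DETAIL_PORT))

# tcp = socket.create_connection((RECEIVER_HOST, BASE_PORT))
# udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_frame(tcp, udp, frame_bytes)
```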
I also don't like the way that DRM is being baked into HTML5, but it's also hardly surprising. Sad, though.
Haven't Amazon been outed as doing this for a while? And in this august organ, no less...
No-one comes out smelling of roses (surely it's all relative, but Einstein was the last great patent clerk).
We will control the horizontal. We will control the vertical. We control the diagonal.
proves intelligent design? Really?
I've got this banana over here. It might help to clarify His Pungent Effulgent. (or maybe not)
The US has a black president and white rappers and now Sony seems (if I'm not dreaming) to be actually supporting Linux.
A vibrating ring for haptic feedback might be handy, though. Stop sniggering at the back!
No sniggering here. I do think that memory wire would be a lot nicer than mere buzzing. For "handling" 3d objects, obviously.
There is nothing beyond the edge of the solar system, it's just a big black board with pictures of stars on it.
Reminds me of Omon Ra by Victor Pelevin. On what really happened with the CCCP's space programme.
Or they might just hear a loud *thunk* as it hits the edge, Truman Show stylie.
Or maybe it just wraps around, Misner-space stylee :)
the patissier sues the boulanger for using the same oven-based technique for cooking food.
Surely, since this is croissants we're talking about, they'd sue over the method of folding in the edges to make them nice and curved ;-)
So everybody is Everywhere Girl now? What a bunch of cheapskates.
Sounds like a job for an inanimate carbon rod to me.
My what now?
The current sense of entitlement in IT is shocking.
It has nothing to do with "our" sense of entitlement and everything to do with Oracle's moral responsibility. Think of Java as being like a teenager going out into the world and Oracle being its guardian. It's up to Oracle to ensure that their brat isn't going to become a public menace. A very large software ecosystem is built around Java and people need to be able to depend on it. At this rate Java is sure to end up hanging around with Flash, and that definitely won't end well.
Math is not patentable so don't try your logic on us.
Shouldn't be, but that didn't stop the patents on RSA encryption, Lempel-Ziv-Welch compression or arithmetic coding, not to mention the myriad other patents surrounding video and audio compression and even bloody container formats.
FOSS code is copyrighted. I think you're confusing it with "public domain" (as defined in the USA).
Why do they need microsecond timing on graphics generation when the viewable result changes only about every 20 milliseconds (roughly)?
Possibly because more complex games will involve multiple rendering passes, probably with tunable parameters for LOD and the like, and being able to budget accurately can make all the difference between hitting your window for syncing to the next frame update and missing it. Just a little drift and you end up losing frames. You might also have to account for vsync, and being off there will really screw things up.
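As a toy illustration of the kind of budgeting I mean (the pass names and sleep() timings are obviously made up, and a real engine would be doing this on the GPU side too):

```python
import time

FRAME_BUDGET = 1.0 / 60.0        # seconds until the next vblank at 60Hz
SAFETY_MARGIN = 0.002            # leave ~2ms for the buffer swap/vsync itself

def run_frame(passes):
    # Run each render pass and track, to sub-millisecond precision, how much
    # of the frame window is left -- bail out to a cheaper path if we're about
    # to miss the flip.
    start = time.perf_counter()
    for name, render_pass in passes:
        render_pass()
        used = time.perf_counter() - start
        left = FRAME_BUDGET - used
        print(f"{name:>10}: {used * 1e6:8.1f}us used, {left * 1e6:8.1f}us left")
        if left < SAFETY_MARGIN:
            print("            -> dropping remaining passes/LOD for this frame")
            break

run_frame([("geometry", lambda: time.sleep(0.004)),
           ("lighting", lambda: time.sleep(0.006)),
           ("post-fx",  lambda: time.sleep(0.008))])
```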
Shenanigans galore smashing doors (via doire, for oak?) into smithereens with a shillelagh after drinking whiskey and blathering on with a thick Irish brogue about seeing banshees and leprechauns.
It's amazing that no real linguists have answered this thread.
Yeah, I was hoping for that, hence my tongue-in-cheek post about phrenology and so on. I'm still kind of curious about the names of the days of the week, and it would have been nice to have a linguist give an explanation. It's nice to know that many Europeans have a God/Sun day and a Moon day (along with other planetary namings), but that doesn't explain why Japan has (and apparently China had at one point) pretty much the same system. Is it actually a case of parallel evolution or did knowledge of the planets and the fashion of using them for naming the days spread via language?
Another coincidence I've noticed between east/west is "-bury" in the UK at the end of place names and "-buri" at the end of place names in Thailand. Is it just coincidence or does it denote a common root language (Sanskrit/Indic languages)? Again, I have no idea, but it would be nice to know...
Gotta chime in here too. It sounds suspiciously like they're using phrenology to back up their claims :-)
Seriously, though, it's all well and good pointing out similar linguistic constructs and then jumping to a conclusion, but a lot of this stuff might be coincidental or maybe a case of parallel development (why is the first day called "Sun" day and the second "Moon" day in so many languages, for example? I don't actually know--just throwing it out there). I'm all for clever theories but the problem with many linguistic theories is that they're not falsifiable. That being said, what's the point?
Beer, since they seem to have run out of jynnan tonnyx.
No, but as a fellow (geodesic) dome dweller I can totally sympathise with you on the exorbitant prices they charge for curved sofas.
I suppose that people eat dumplings nearly the world over. My favourite would have to be Japanese-style gyōza. Mix up minced pork, cabbage (finely chopped, lightly salted, then squeezed to remove moisture), spring onions, shrimp, ginger, garlic (all finely chopped or minced) and sesame oil for the filling, with just plain flour and water for the wrapping. There are as many ways to cook these as pierogi, but I think the best is to fry them first in a very small amount of oil, then put a small amount of water in the pan and cover it so that the steam cooks everything. Remove from the pan when all the water has evaporated and serve with a mix of soy sauce and chilli oil.
Besides tasting delicious, they look great too if the edges are pleated properly (very fiddly to get exactly right, unfortunately).
I'm reminded of that joke in Trading Places. You know... the "look at that S car go" one...
pretends to be something useful in order to trick
Like a giant wooden horse, for example. Someone should surely be able to find a use for that.
re: change in ratings, perhaps it's because in this review it's stacked up against other sub-£100 phones. As an android phone it might get 80% overall, but 90% if you're buying on a budget. That's my take on it.
I might as well recommend looking up John Cooper Clarke's "Evidently Chickentown." It's probably just as relevant (hint: not).
Yes we're all different..
Eh, "individuals", surely? "Different" (like Apple?) might make some sort of vague sense, but let's not mess with the canon here.
OK, so I'm not doing this for a PhD, but there are some fairly obvious improvements possible.
The main one involves an anonymous broadcast protocol, e.g. some version of the Dining Cryptographers problem (hello, "suppernode"). It's a simple protocol where each diner flips a coin in secret with their immediate right-hand neighbour, then everyone announces "same" or "different" (there are as many coins as there are diners, and each diner can see two coins). If everyone is truthful then a parity calculation should always yield an even number of people saying "different", but if someone lies they throw the parity calculation off and so they've effectively broadcast one bit anonymously.
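Here's a minimal Python sketch of a single round (toy code, nothing tor-shaped about it): coin i is shared between diner i and the diner to their right, everyone reports whether the two coins they can see match, and the parity of all the announcements carries the one anonymous bit.

```python
import secrets

def dc_round(n_diners, broadcaster=None):
    # Coin i is shared between diner i and diner (i + 1) % n_diners.
    coins = [secrets.randbits(1) for _ in range(n_diners)]
    announcements = []
    for i in range(n_diners):
        # Each diner sees their own shared coin and their left neighbour's:
        # 0 = "same", 1 = "different".
        sees = coins[i] ^ coins[(i - 1) % n_diners]
        if i == broadcaster:
            sees ^= 1          # the broadcaster lies to flip the parity
        announcements.append(sees)
    message_bit = 0
    for a in announcements:    # every coin appears in exactly two announcements,
        message_bit ^= a       # so the XOR is 0 unless someone lied
    return message_bit

# dc_round(5) is always 0; dc_round(5, broadcaster=2) is always 1, and the
# announcements alone don't reveal that it was diner 2 who "spoke".
```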
That algorithm isn't practical for tor, but there are other algorithms for achieving the same results. Mostly they include a fixed hierarchy (think nested dinner tables and "suppernodes" again) but they'd be much better if there was some sort of protocol for assigning "seating" on a random, ad-hoc basis. All these protocols also need to be hardened against deliberate disruption. Search for "dining cryptographers with cheaters"...
Chaffing and winnowing is another sort of protocol that might make sense. If you can set up a secure data channel to your egress point and you have some method of guaranteeing that the channel goes through some number of nodes whose sole function is to introduce random noise into the conversation stream, then a shared, private (configurable) checksumming algorithm is enough to defeat eavesdroppers with any required degree of confidence (it then becomes a form of probabilistic encryption).
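A rough Python sketch of that idea, using an HMAC as the shared "checksum" (the hard-coded key and the in-memory "wire" are toys, obviously):

```python
import hashlib, hmac, os, random

KEY = b"shared-secret-between-the-endpoints"   # negotiate this properly in real life

def tag(serial, payload):
    # Keyed MAC over the serial number and payload -- only the endpoints can compute it.
    return hmac.new(KEY, serial.to_bytes(4, "big") + payload, hashlib.sha256).digest()

def chaff_and_send(real_packets, chaff_per_packet=3):
    wire = []
    for serial, payload in enumerate(real_packets):
        wire.append((serial, payload, tag(serial, payload)))                 # wheat
        for _ in range(chaff_per_packet):                                    # chaff
            wire.append((serial, os.urandom(len(payload)), os.urandom(32)))
    random.shuffle(wire)       # eavesdroppers without the key can't tell which is which
    return wire

def winnow(wire):
    kept = {}
    for serial, payload, mac_bytes in wire:
        if hmac.compare_digest(mac_bytes, tag(serial, payload)):
            kept[serial] = payload
    return [kept[s] for s in sorted(kept)]

# winnow(chaff_and_send([b"attack", b"at", b"dawn"])) == [b"attack", b"at", b"dawn"]
```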
The third alternative I can think of is to leverage the store-and-forward aspect of the network. Tor has some specialised DHT (distributed hash table) functionality, but as far as I can tell it's not used in the normal case of operation where you just want to connect to a particular site anonymously. In other words, tor is mostly just forwarding packets and not acting as a storage network.

What I have in mind is changing things so that tor would act more like a short-lived storage network, with each chunk of data effectively having a "half life": peers shunt parts of each chunk around among members of their ad-hoc peer groupings and randomly drop a fixed percentage of all chunks. I won't go into details (the algorithms I have in mind aren't too hard, though), but it should be possible to maintain a coherent DHT even in the face of all this deliberate data loss, and also in the presence of cheaters.

The point of this is to spread delivery of your requested data (web page) out in the time domain, so that even if an attacker can snoop your incoming and outgoing traffic and has also subverted the exit point (so they know what web page was downloaded), they can't prove that you were the person who requested it. I suppose what I'm basically saying is that there are algorithms and protocols that would enable a "probabilistic delivery" model with tunable parameters for how often you want your download to succeed within a given time frame (the "half life") and what level of protection you want against eavesdroppers in the worst-case scenario. I guesstimate that the network would only have to store something between 6 and 10 times the volume of peak traffic of the equivalent "forward-only" network for this "probabilistic delivery" method to be viable.
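To put a rough number on the half-life/overhead trade-off, here's a toy Monte Carlo sketch. It only models the replicate-then-randomly-drop part, not the DHT bookkeeping, and all the parameters are made up; playing with replicas/p_keep/rounds is the kind of calculation behind my 6-10x storage guesstimate.

```python
import random

def survival_probability(replicas, p_keep, rounds, trials=20000):
    # Each chunk starts with `replicas` copies; every round each copy
    # independently survives with probability p_keep. The chunk is still
    # retrievable as long as at least one copy remains.
    survived = 0
    for _ in range(trials):
        copies = replicas
        for _ in range(rounds):
            copies = sum(1 for _ in range(copies) if random.random() < p_keep)
            if copies == 0:
                break
        if copies > 0:
            survived += 1
    return survived / trials

# e.g. a one-round "half life" (p_keep=0.5) over the delivery window:
print(survival_probability(replicas=8, p_keep=0.5, rounds=3))
```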
like "military intelligence" or "jumbo shrimp"?
Or maybe "King pawns?"
I'm probably not a typical user, but what I'm mainly using Dropbox for is to provide a synch mechanism between my real XP installation and the version that I run in a virtual machine under Linux. Even then, there are actually only two applications that I regularly run in the VM that I want to keep synched, and the amount of data isn't very much at all.
I wouldn't trust these kinds of services for backups unless (a) I had some sort of front-end encryption meaning they couldn't snoop on what I'm storing and (b) I already had better/more secure backups in place anyway (say using something like Dropbox for daily backups, but making sure I do my own weekly one to my own backup box). They are kind of nice to have for "ad-hoc" backups as you put it. I could see myself using the "selective sync" option in Dropbox to set up a spare machine as an occasional repository for (relatively small) backups. I like the way that you don't need to power up the machine straight away (just get to it when it's convenient and start Dropbox to sync into your local copy), but obviously it does mean double the data transfer burden on your net connection. That's not an issue if you're syncing between home and work machines, though, and one or the other is firewalled so you can't do push transfers.
So actually I think the main advantage of these things is that they make data transfers and file sync easier--not that they serve as a primary/backup store for your data. Also remember not to trust that they won't read your stuff or screw up your files irrevocably once in a while... otherwise, it's a pretty nifty tool.
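On the front-end encryption point, something as simple as this keeps the service from reading what you store. It's a sketch using the third-party cryptography package's Fernet API; the paths are made up and the key handling is deliberately naive:

```python
from pathlib import Path
from cryptography.fernet import Fernet

KEY_FILE = Path.home() / ".backup_key"            # keep this OUT of the synced folder
SYNC_DIR = Path.home() / "Dropbox" / "backups"    # whatever folder your service syncs

def load_or_create_key():
    if KEY_FILE.exists():
        return KEY_FILE.read_bytes()
    key = Fernet.generate_key()
    KEY_FILE.write_bytes(key)
    return key

def encrypt_into_sync_folder(source: Path):
    # Encrypt locally, then drop only the ciphertext where the sync client can see it.
    fernet = Fernet(load_or_create_key())
    SYNC_DIR.mkdir(parents=True, exist_ok=True)
    ciphertext = fernet.encrypt(source.read_bytes())
    (SYNC_DIR / (source.name + ".enc")).write_bytes(ciphertext)

# e.g. encrypt_into_sync_folder(Path.home() / "weekly-backup.tar")
```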
So you call for citations from AC, in the very same sentence as you say "I think it was Apple and Microsoft spreading FUD about Vorbis infringing on unspecified patents"? Where are your citations?
There. Happy now?
And by the way, if you go back and read what I said again, maybe you'll understand it in the way I intended: there indeed have been rumours of Ogg violating patents. I [thought] it was Apple and Microsoft making those [unfounded--hence "FUD"] allegations.
Have a nice day, Mr. AC.
Finally an agreement then that Java requires special* hardware to run it with decent performance?
I can't tell if you're joking or not, but are you aware that some ARM devices have the capability of running Java bytecode (more or less) natively with Jazelle? That's not to say that Java needs special hardware, and it certainly wouldn't be the first time that hardware has been built with support for a particular high-level language in mind--most notably the Lisp Machines of the '80s. The wikipedia page for them (http://en.wikipedia.org/wiki/Lisp_machine) also mentions some other languages where special CPUs/computers were built: Prolog, Modula-2, Erlang and, yes, Java.
That certainly doesn't mean that any of these languages need special-purpose hardware. I think it's more a case of "if we can improve performance by building custom CPUs/machines, then why not try?"
While not specifically to do with GPUs as such, I did some research a while back into the possibility of getting a Java VM that could run on a PS3 so that it would be able to run code on the SPUs as well as on the main CPU (PPU). Besides the approach of actually adding new keywords to the language (as is mentioned in the article), I found two projects that actually got some way towards the goal. Both aimed to take unmodified Java code and have it run on an asymmetric CPU setup (ie, the PS3's Cell). The first (*) was based on the CACAO JIT compiler and hooked into the function call mechanism so that each method got executed on a separate core. I don't think the author got very far, but he did go over a lot of options and documented a lot of the design decisions very well. The second (**) was based on the Jikes Research VM and it used thread creation as the point to migrate control over to a new core. That project got a lot further, but I don't think they ever publicly released the code (although I'm sure an email to the authors would probably get you access). Again, the various papers and such that they produced give really good descriptions of the approach taken and all that.
Where I'm going with this is that it's hard enough to target the JIT code generation so that it can run on an asymmetric setup like that of the Cell processor. It's much more difficult when you try to target it to graphics hardware. OpenCL is a nice step in the right direction for doing GPGPU work, but graphics hardware design (except maybe for the really high end stuff--I'm not sure) still tends to be geared towards fixed ideas of the execution pipeline (eg, shader models, emphasis on textures and matrices) and there are generally fairly high penalties for such things as branching, sending data back out to the CPU (outside of the frame buffer mechanisms), context switching and inter-core communication (again, if it lies outside the standard shader/pipeline model). I would love to see GPU cores and the interconnects between them and the CPU moving more towards the Cell model, but OpenCL notwithstanding I don't think we're making much progress in that direction. Likewise, I'd love for these guys to succeed, though I think it's going to be a long hard climb.