53 posts • joined 4 Mar 2010
Re: Industrial equipment on Android
Depending on what sort of equipment we're talking about here, I completely understand the desire to have something which just works. I don't have any personal experience with Windows CE, but with the number of ATMs, electronic billboards, departure boards etc. I've seen either sat at a desktop, or adorned by a conspicuous error pop-up with obvious Windows GUI stylings, I'm not sure I'd class Windows as something which just works. IMHO, a UNIX is a better choice if you want a box which just does one thing (and does it well), because it's so much easier to make it boot directly into the necessary state, and to remove extraneous software. I can only speculate that the reason this isn't often done in practice is simply due to the huge pool of developer "talent" available surrounding Microsoft technologies.
Industrial equipment on Android
You probably won't, since it would make more sense to use Linux with a more standard userland, such as an X server for graphics. I wouldn't be surprised if such stuff already exists - UNIX (which Linux intentionally apes) has been around since the seventies, and X since 1984, with X11 in '87, according to Wikipedia. I've never understood why people put full installs of Windows on what are, effectively, embedded devices.
Re: HTTP is not so much an application as a transport these days.....
So which of those Ts stands for "transport", then? Oh, wait... that would be *neither*.
The part of HTTP that everyone forgets about in the context of streaming media is the H. Text and binaries don't benefit from a lossy transport mechanism, since the client needs the original data intact. Streaming media easily can, but isn't being given the chance.
As for being easy to shove through proxies, I can only hope this DASH nonsense actually obeys the HTTP standards, instead of trying to get away with a vague resemblance and happening to use port 80.
What's it missing? The ability to use content from anywhere except YouTube and the Google Play store (movies & music). Officially, anyway. No content from the LAN at all, despite the network connectivity.
Even *with* the ability to play existing content from a LAN, one could still argue it's over-priced. I'm not a Google hater - I use their online services, have a Galaxy Nexus, and am thinking of getting a Nexus 7 - but I really, really did not understand the Q. Here's hoping they come up with something worthwhile.
Re: Daft twat (singular)
For people with their own incomes, and the time & knowledge to set this sort of thing up, hacking on an existing home computer or dedicated VM is fine. However, this assumes people are comfortable tinkering with their home PCs, and have permission to do so. For a lot of schoolchildren, though, the home PC is expensive, runs Windows only, and is not to be modified or broken on pain of death.
Now consider the Pi. Given the size & price point, schools can reasonably equip entire year-groups with these, even allowing the students to take them home for homework/coursework use. Teaching materials can be created with the knowledge that the entire target audience is running common hardware and software. Teachers and IT departments can rest assured that unless a device is physically broken, it can always be fixed by re-imaging the SD card. The educational release later in the year will have a case, instead of being just a board, and quite probably some printed reference material.
It's a fine device for tinkerers, which is why I have two, but let's not forget what it's really being aimed at.
Mine do get used
I don't understand why there's so much hatred for the Pi coming from some people. Yes, the media hype got a bit ridiculous at times, but if you actually read the Foundation's website, as far as I can tell they've always been open & honest about exactly what the device is. Their only fault, perhaps, is in not stating firmly enough that the initial release, minus case & manual, was supposed to be aimed at developers. For example, anyone moaning about performance ought to read the following bit of the FAQ:
"That is, graphics capabilities are roughly equivalent to Xbox 1 level of performance. Overall real world performance is something like a 300MHz Pentium 2, only with much, much swankier graphics."
Sounds about right to me, with the possible exception that you only really get that performance via OpenGL ES or hardware video decoding, not currently via your average 2D X desktop. Bear in mind that OpenGL ES is the same 3D graphics API found in Android and various other mobile/embedded things; it's worth learning in this day & age if you want to program for that sort of thing.
Anyway, mine do get used (not a typo, I have two - ordered one from RS, and one from Farnell). One sits in front of the TV and runs XBMC, with the SD card occasionally getting swapped out for me to tinker with some graphics programming; the second is going to be turned into a NAS box, and maybe a few other kinds of light-load server, since I don't currently have an always-on box at home.
Re: Some Interviewers are like generals...
I wish that guess were true; it would make me feel a bit better. I know nothing about COM or OLE, don't have it on my CV, and have no desire to get involved in that sort of thing.
Sadly, the position was (supposedly) developing a new product. On Linux.
Am I going mad?
I just had a telephone interview yesterday, and found out an hour or so ago that the feedback wasn't good. Apparently my skills aren't strong enough. However, I can't shake the feeling that at least part of the problem was on the interviewer's end, and that I wouldn't necessarily have wanted the job anyway.
To clarify, this was a technical interview, focussing on C++. The questions asked were a bit odd, though. Things like "can you throw an exception from a constructor?" (yes, but many people neglect to wrap construction with exception handling), "is the destructor run when you do this?" (no), and "how are virtual methods implemented?" (with a vtable), fair enough. But then he spent a good 10-15 minutes banging on about the behaviour of throwing exceptions from destructors, and about the validity and behaviour of running the statement "delete this;" from inside a class method. Now, I would regard both of these as generally bad ideas, capable of leading to memory corruption and quickly leading to the no-man's land of "undefined behaviour". There are circumstances in which they are valid and can work - e.g. an object can delete itself as long as it was allocated with "new", and providing that none of the code from that point onwards tries to access non-static data members - but for some reason the bloke seemed really interested in probing my knowledge of these areas, with some of his questions verging on incomprehensible.
Now, generally, I would expect any decent C++ developer to have a basic understanding that these things are bad ideas, preferably backed up with at least some idea of *why* they are bad, and to avoid using them as a pattern unless absolutely necessary. I would not expect them to try to defend their usage too vigorously. Yet I can't shake the feeling that it was this part of the interview where I stumbled, since it was one of only two points where we ended up moving on with something unanswered.
So am I going mad, or was the interviewer barking up the wrong tree? Would you even want to work on code developed by people who expect deep knowledge of such esoteric and dangerous usage of the language?
Or is this really the kind of thing I should be reading up on if I want to get past phone screening in this day and age?
Apple and Google the HTML5 champions?
What about those folks at Mozilla? They seem pretty keen on all this HTML stuff.
Google push it because it's the best (well... only) way to get their services onto all platforms, and Apple push it because... well... actually, I've never really quite understood why Apple push it. I dislike Flash as much as the next man, but there's a reason they ended up having to backtrack and release a native development kit for iOS.
Re: It wasn't confidential anyway.
This doesn't excuse the fact that they are storing people's account passwords in the clear, and exposing them to random site visitors. The security implications have nothing to do with the consultation itself, or how they will use people's responses, and everything to do with exposing the names, email addresses and passwords of people using the site.
Re: The form is addled
The form is addled because you are logged in as someone else. Is your Twitter account @apmapmapm? If so, I logged in as you at one point. Might want to change your password.
I am deadly serious.
If you don't have an account with the DfE already, DO NOT MAKE ONE. I have clicked through to the consultation link TWICE now, and ended up logged in as TWO DIFFERENT PEOPLE (hint: neither of them was me). It is possible to view people's account details.
Get in touch with them NOW and demand an explanation. This is appalling.
I really wish people - especially journalists - would stop writing "retina display" in lower case, with no kind of trade mark qualifiers. It's not an accepted technical term, or an industry standard, it's a piece of marketing bull from Apple. Why is everyone so taken in by it? It is not magic, just a display with a high pixel density.
To be fair on Shteiman, his original blog post calls it a DoS tool. It is everyone else commenting and reporting on it who seems to have forgotten that the second D means "distributed", presumably because DDoS is still such a hot buzzphrase that they felt the need to use it without really thinking it through.
I really hope OP is trolling. If not, this is just depressing, because the correct response to playing with fire & getting burned is not getting angry with the fire.
Re: Don't get it.
I don't get it either. They've reduced the latency to "only" 160 milliseconds - anyone here remember the bad old days when your ping would rise to those sort of levels in Quake 2 and you'd be left dead in the water?
Sure, there might be some clever trickery in the drivers to push the image stream out directly rather than have to screen-scrape it after the fact, but it's nothing you couldn't replicate at various levels of the stack. In fact, it's already been done on Linux, in a far more generic way via "virtual CRTCs": http://www.phoronix.com/scan.php?page=news_item&px=MTAxMDk
Basically, instead of hooking a GPU up to a physical display, you tell it "there's one over 'ere, guv, honest", then scrape the framebuffer off into whatever pipeline you so desire. For example, a video compressor and a streaming server. "Real" hardware accelerated 3D for VMs is also already covered by the open-source stack (http://wiki.xensource.com/xenwiki/GPU_pass-through_API_support), though presumably NVidia have managed to lift the "one VM per physical GPU" limitation.
Kudos to them for bringing it all together into a finished product, but give it a year or two, and you'll be doing this with KVM or Xen out of the box on Fedora, with open-source all round.
Re: Save icon
The default GNOME 3 icon theme uses a filing cabinet, which has obvious advantages in that it is a logical extension of the whole file/folder metaphor (as in, saving = putting a file into the cabinet), and it's completely agnostic to the actual storage medium. I've also seen icons with hard drives, although those tend only to be found in installers - fewer people are likely to know what a hard drive looks like than a filing cabinet, and you may not be saving to one at the time.
Stupid Interwebs. Parent post was supposed to be in reply to Bradley Hardleigh-Hadderchance.
Dark IDE backgrounds FTW!
I agree with your points about dark backgrounds in IDEs. I spend most of my time looking at Vim in a green-on-black terminal emulator with the default colour scheme and "set bg=dark". gVim gets its colour scheme set to the slightly more subtle "desert", and last (also least) comes gedit with "oblivion". Colour is good, but if I'm going to be looking at something for an extended period of time, bright backgrounds are right out!
GNOME 3 apps generally do a very good job of being visually consistent with each other, but GTK3's defaults also suffer from being very monochrome. I can handle grey-on-grey title bars and scroll bars, but icons on buttons & in menus always get re-enabled on my machines. Icons aren't always completely obvious, but when I first used GNOME 3 I noticed I had a hard time taking anything in when looking at a menu, because I kept habitually scanning down the left-hand side and drawing a complete blank.
Re: Still suffers the same problems
GNOME Shell != Unity.
No wonky scrollbars, global menus, dashboard, HUD or related software recommendations here, matey. (Yes, I am aware that the GNOME folks have their own global menu-esque intentions, and am hoping it doesn't cock up something which is otherwise reasonably sensible and usable.)
Re: Unified File System Layout
Re: New FAIL
Blame the kernel, Xorg and/or the drivers for whatever graphics hardware is in that box, but not GNOME 3. Not being an apologist, just you're blaming the wrong bit of software when something as basic as modesetting doesn't work.
Re: GNOME 3.4 continues to polish GNOME 3...
That doesn't change the fact that your earlier complaints about desktop environments taking up too much screen space, scattering interface elements around the screen, requiring a "show desktop" button and encouraging excessive mouse movement can't really be levelled at GNOME Shell, IMHO. (The "Activities" button in GNOME Shell is *not* a "show desktop" button; it opens the activities overview, which un-hides the dock & workspace switcher, shows open windows Expose-style, and can be used to find and launch applications not on the dock. It can be activated from the keyboard.)
I don't think GNOME Shell is perfect, and am dreading the introduction of application menus integrated into the top panel; but of all the complaints I've had, I've never once thought "this is too cluttered" or "this takes up too much screen space".
Re: GNOME 3.4 continues to polish GNOME 3...
Have you actually used GNOME 3? One of its major selling points is that unless you're actively using it to launch an app or manage windows/workspaces, it pretty much gets out of your way and stays there. There is only a top panel, unlike the default top *and* bottom panels in GNOME 2; the systray and notifications are only visible when something happens or when pulled up manually; by default the file manager does not run in the root window, so there are no icons on the desktop at all.
Your complaints about "most major desktop interfaces" sound, to me, like exactly the sort of issues the GNOME Shell team are trying to address with their design. Give it a try, you might like it.
Re. WHO checks in their actual location???
If you have an Android phone, open up Google+ and switch to the "Nearby" stream. It's just... depressing. Around here, at least. So many people seem to think it is a good idea to publicly advertise where they are and what they're doing, to complete strangers.
I'm surprised there hasn't been any backlash about the stalking potential, as there has been with other similar services - or perhaps there has been, and I just haven't stumbled across the articles...
I think the reference to Base64 was intentional - the point being that this "steganography" technique appears to make it so obvious that there's a hidden message in the text, it is no better than using pre-existing bog-standard encodings to carry your raw binary data (encrypted or otherwise). This new technique is not steganography in the same way that Base64 is not encryption.
Disclaimer: I haven't actually read the paper. For all I know, the article may be an over-simplification, and there may actually be something to it.
Is it bad...
... that I correctly predicted two things from the headline alone? Thing the first: that it would be a Matt Asay article; thing the second: that I would disagree with it.
Put that pint down right now
You talk of bottled ales and prices in pounds sterling, yet claim Tesco have "a ways to go"? Get out of my local, you dirty colonial!
Except they're not
If I understood that Opera blog post correctly, they're basically saying that their browser does perform CRL and OCSP checks, and does "downgrade the security level" of a site if revocation checks cannot be performed (in contrast to the default settings in Chrome and Firefox, which carry on regardless if a revocation check cannot be performed).
However, all that happens when a site has its security level "downgraded" is that the address bar doesn't show a padlock. This doesn't actually stop people from using the site anyway, or stop scripts from running, etc. - so, basically Opera might as well do nothing at all.
They're all currently worse than useless (at default settings, at least - Firefox *can* be configured to treat OCSP responder failures as fatal), but the solution is simply to make CRL and OCSP checks mandatory, with browsers throwing up errors if the checks can't be performed. Google are re-inventing the wheel, when really all they need to do is take the one they've got and sand it down a bit so it rolls more smoothly.
"Because OLED panels refresh around 100x faster than LED panels (response time is quoted at just 0.01 milliseconds), left/right screens don’t overlap and double imaging is avoided."
Surely the lack of crosstalk has sod all to do with the refresh rate, and more to do with the fact that instead of a single screen rapidly flicking between left/right eye images, there's a separate physical screen for each eye? Don't get me wrong, a high refresh rate is a Good Thing, but it has absolutely nothing to do with the quality of 3D images in this instance. I also nearly face-palmed when I read the bit about not being able to look at a controller whilst wearing them, as per several other commenters. Please can we have this sort of kit reviewed by someone who actually knows what they're on about in future?
obvious troll is obvious. I hope.
"Oh, and @mangobrain, sticking GnuTLS on a server won't help."
Not on its own, no - that was in response to Ken Hagan's "how hard can it be" question. Although, in a rare twist, IE8 and 9 supposedly support it on the client side as well - albeit not enabled by default (at least, not on my machine). OpenSSL support is definitely needed.
How hard can it be?
Well, if you're administering anything which uses OpenSSL, the answer may be "very", seeing as OpenSSL only supports up to TLS 1.0. There's GnuTLS, but switching to it fully requires the webserver to support it and be compiled to use it. It has an OpenSSL compatibility layer, but I don't know how well said layer works.
TLS 1.1/1.2 support needed at both ends
The problem here is that the server side also needs to support TLS 1.1/1.2, which OpenSSL - probably used in the majority of Apache HTTPS servers - doesn't. If the server only supports up to TLS 1.0, then whatever the client advertises support for, the version will end up downgraded to 1.0 as part of the initial negotiation.
However, since the attack only works with block ciphers in CBC mode, there's a second work-around that could easily be implemented: if the server responds that it only supports TLS 1.0, abort the handshake and start again, prioritising a stream cipher (of which RC4-128 is the only viable option in TLS 1.0, AFAIK). Unfortunately it would have to involve disconnecting & reconnecting, since the client outlines its supported ciphers & their priorities in its opening message (i.e. before it is known which TLS version the server wishes to use) and I think only servers can initiate a renegotiation in-session, but it could be done.
Walled garden application stores are exactly the kind of thing the "anti-tivoisation" clause is aimed at. If I create a program or library and release it as open-source software, I may not wish it to end up in use on devices which don't permit end users of those devices - myself included - to modify the in-use version. Accordingly, I may release it under the GPLv3. If you don't like that, don't use my software.
The GPLv3 is incompatible with the App Store because the App Store is a closed system. Many would argue this is a failing of the App Store, not the GPLv3.
From the GPLv3 itself:
"If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information."
The definition of "Installation Information" includes "authorization keys" required to "install and execute modified versions of a covered work in that User Product". Note, however, the following exception:
"But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM)."
So manufacturers have two obvious options here, excepting "don't use GPLv3 software ever":
- Install firmware in ROM, or some other non-rewritable medium. Updating firmware becomes difficult, but there are still some classes of embedded device which already lack support for this.
- Don't include any GPLv3 software inside signed blobs, where such blobs are on rewritable media. If the DRM or other security-sensitive software they're using (including the libraries it consumes) is either proprietary, or under licences which lack this clause, then they can still sign those bits. Just don't sign the whole system image, if the system as a whole contains software under the GPLv3. In an Internet-enabled DVR, for example, that might leave me able to replace the integrated web browser with one of my choice, whilst still not touching the video decoding parts.
A Linux kernel licensed under the GPLv3 wouldn't automatically become unable to run proprietary software, because - kernel modules aside - you don't actually link to it. Such a prohibition would certainly be against the spirit in that instance, anyway.
The fourth option is to design better DRM ("better" in the sense of "less susceptible to circumvention"), for example mandating that it be implemented in hardware rather than software. This doesn't make it impossible, but then again it isn't impossible at the moment, as has been repeatedly demonstrated.
Separation of computation from presentation
"Now that I'm using it on a regular basis I'm amazed that we didn't come out with the idea of presentation/computation separation much earlier in the history of computer programming."
You don't have to use "HCJ" (as you insist on calling it) to do this. Two such technologies I've used are GtkBuilder (recent, but its forerunner libglade has been around for a while) for GTK applications, and XRC for wxWidgets applications. These are not the only solutions. Their primary advantage over "HCJ" is letting you write native desktop applications in a choice of compiled or non-compiled languages.
GUI development doesn't begin and end with Visual Basic, thankfully. Of course the technologies I'm talking about are layers *above* the "raw" toolkit APIs, because under the hood one still needs to be able to construct GUIs programmatically; how else would these things be implemented? The real issue in Windows land is simply that Microsoft has, prior to the advent of XAML, never come forth with any first-party technology for doing this. Even XAML muddies the waters by requiring you to use a .Net language (whereas GtkBuilder and XRC can be used from any language with GTK or wxWidgets bindings), with everything that entails, and its penchant for being compiled into binary files and inserted directly into assemblies.
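To make the GtkBuilder point concrete, the interface lives in an XML ".ui" file which the application loads at runtime. A hypothetical fragment (widget ids and layout invented purely for illustration) looks something like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<interface>
  <object class="GtkWindow" id="main_window">
    <property name="title">Example</property>
    <child>
      <object class="GtkButton" id="ok_button">
        <property name="label">OK</property>
      </object>
    </child>
  </object>
</interface>
```

The program then calls gtk_builder_add_from_file() (or a binding's equivalent) and looks widgets up by id, so the same .ui file works unchanged from C, Python, or anything else with GTK bindings - which is exactly the separation of presentation from computation being discussed.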
This is not irony, this is things going according to Google's plan. Donate resources to educational institutions so that students get hands-on experience with your platform, then reap the rewards.
"... but there's no reason why the technology can't be adapted to large TVs."
Except, of course, that TVs are often watched by more than one person. Good luck tracking multiple pairs of eyeballs and applying multiple image adjustments with only one physical screen.
The title is required, and must contain letters and/or digits.
IPv6's must-have service
How's about "continued access to the internet"?
Once all the v4 addresses are gone, they're gone. This doesn't just mean that home users will have to pay through the nose for a static IP from their ISP, but also that there won't be any more public IP addresses to give out, even to businesses. The internet will either stop growing, or the usage of BIG HACKS - such as using swathes of non-private IP address ranges behind dirty great NATs - will increase to the point where nobody can reliably route traffic to anyone else.
IPv6's killer app is the continued growth of the Internet. What more does it need?
Rumours of incompatibility greatly exaggerated
So businesses with IPv6 only infrastructure simply won't be able to communicate with the v4 web, and vice versa? Says who?
Since IPv6 can safely encapsulate the entire range of IPv4 addresses as a subset (IPv4 mapped IPv6 addresses), it's fairly trivial for a dual-stack host outside the business's LAN to take v6 traffic and forward it to the v4 web. Also, since IPv4 can be used as a link layer for IPv6 (tunnelling), I can set up IPv6 inside my LAN, tunnel IPv6 traffic to a dual-stack box *outside* my LAN *using IPv4*, and have the tunnel endpoint forward it to the v6 web for me. There are various tunnel brokers already in existence offering this service, including some offering it for free.
The transition won't be trivial, but there's really no need for it to split the web in two until everyone's 100% switched over. Also, tunnelling between IPv4 LANs over the IPv6 web will be happening for many years to come, to support the mountains of legacy v4-only software (game servers in particular).
To be fair, on second reading, maybe my post was a bit overly harsh. I'm disappointed that the proprietary drivers don't seem to work, but also understand that this isn't the GNOME team's fault - I'm not the only one, though; a quick Google search turns up this bug report, amongst others (although a corrupt activities bar is only the start, for me): https://bugzilla.novell.com/show_bug.cgi?id=685691
I have an HD6950. KMS and 2D work fine with the open-source drivers, but not 3D acceleration, and I have to venture beyond my distro's mainline packages to get that far. Admittedly I had to do this to get GNOME 3 at all, but other distributions will probably be making the switch sooner, Fedora being one such example. (I use Gentoo, so I'm used to having to fix the occasional bit of local breakage.)
I understand that the situation with high-end graphics cards on Linux is often sub-par, because it's a game I've been playing for a while now, but that doesn't mean I enjoy it. If I could get the shell to work with software 3D, I'd happily live with a bit of slowness until one of the two drivers allows me to run it "properly", but it's quite frustrating being simply unable to run it at all! I've been following the shell design & mock-ups for a while, and haven't been entirely convinced by the screenshots & videos I've seen, but I really do want to give it a chance. At the moment I'm left wishing I hadn't bothered, because the fallback session just feels like GNOME 2 with all the useful bits removed.
Annoyingly, I've seen mailing list posts about getting the shell to work with llvmpipe, but they don't appear to have amounted to much - or the necessary work hasn't been merged into Mesa's master branch, perhaps.
Haven't tried an F15 live image - might do so when I get back home later. I have an nVidia card in my machine at work, but I'll be holding out a bit longer before putting GNOME 3 on that one, since ... well, I need it to be stable for work.
GNOME Shell on ATI = fail
I don't quite understand why I don't see more people complaining about this. I have an ATI graphics card, on which the open-source drivers don't support hardware 3D acceleration (whichever part of X or Mesa is responsible falls back to using llvmpipe, which causes GNOME 3 to drop to the fallback session), and on which the proprietary drivers suffer from texture corruption which gradually renders the shell less and less usable the longer it stays open. Am I honestly supposed to believe that every GNOME developer has either an Intel or nVidia graphics card? Is there anything I can do to help, bearing in mind I haven't the faintest clue how to write device drivers?
Also, I'm not sure why they bother providing gnome-tweak-tool with the option to re-enable desktop icons, considering the option doesn't actually appear to work. Not for me, anyway.
I've steered well clear of PulseAudio and NetworkManager for the past few years, having heard lots of complaining about the former and having personally failed to get earlier versions of the latter to work. However, I bit the bullet and installed both along with GNOME 3, did a bit of reconfiguration, and am pleased to say they actually seem to have matured a bit. Hopefully, with time, I'll be able to say the same about GNOME Shell. It would be nice if the distributions stayed away from all these newfangled technologies until they actually work out of the box.
Event-driven "or" multithreaded?
Certain people here, including - it would appear - the author of TFA, seem to think that the term "multi-threaded" is synonymous with using blocking I/O to service one session per thread, whereas "event-driven" is synonymous with using a single thread to service multiple sessions via the magic of non-blocking I/O. I hate to break it to you, but all "multi-threaded" means is that a process spawns multiple threads, it doesn't have to dictate an entire server architecture. In a similar vein, if when you say "event-driven" you really mean "uses non-blocking I/O", then just say so.
The blocking I/O, one-session-per-thread architecture began life as one session per child process, on OSes where IPC doesn't suck, and fork not only exists but provides decent copy-on-write memory sharing - i.e. not Windows. It seems to have become fashionable to use the same architecture, but with threads in place of child processes, mainly because of a certain OS's handicapped notion of child processes. Oh, and because we're always told threads are difficult to work with, so being able to make them work must make you a l33t h4x0r, amirite? OK, so in-memory message passing can be faster than true IPC, but the genuine potential benefits of multi-threading don't seem to be the main reasons why people use it.
On the other hand, a *single-threaded* event-driven architecture will very quickly stop performing if you actually need to do anything CPU-bound with your data, as opposed to simply shunting it from source to sink. Notice, though, that at no point have I mentioned multi-threading (using the real definition, not the article's definition) and event-driven concepts being mutually exclusive - you'd just better do it that way from the ground up, because bolting multiple threads onto existing single-threaded code is no less a recipe for disaster than trying to turn blocking procedural code into non-blocking event-driven code. I'm not saying it's easy to get right, either; you can find yourself bogged down by the overheads of things like context switching and message queue locking very easily. Don't try and do it on Windows, either, because IME it just doesn't have the APIs to do the approach justice.
Nowt wrong with a bit of C++; if you can learn to use the POSIX socket API, your understanding of pointers and memory management ought to be up to snuff for most things you'll need to do in the real world. (Bloody sockaddr structures, bane of my existence.)
Oh, and I'm only in my 20s, so hopefully there is *some* hope for future generations....
The title is required, and must contain letters and/or digits.
As interface designer, Mr. Raskin didn't necessarily write a single line of UI code. That would be the job of a programmer, not a designer. So, unless the bugs of which you speak are actually not bugs at all (being instead intentional behaviours of a badly broken design), Raskin's departure has nothing to do with it.
I hope you aren't serious...
The "type of security" of which you speak is actually no security at all in this case, and undoubtedly many others.