36 posts • joined Wednesday 1st September 2010 16:20 GMT
COBOL is for adding up accounting info, and database operations.
COBOL was able to add up numbers of arbitrary length (directly in decimal, usually) while others were stuck with 16-bit ints. Even 32 bits only gets you to $20M (with pennies). So it can do 'hard sums' in that sense.
But, go write a COBOL procedure which finds the inverse of an m x m matrix, with variable m, and report back to us how it went. And if you succeed, let us know how fast it runs relative to the C or fortran version.
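The penny-counting limit above is easy to check. Here's a quick illustration (in Python rather than COBOL, with `decimal.Decimal` standing in for COBOL's packed-decimal fields; the figures are illustrative):

```python
from decimal import Decimal

# A signed 32-bit int counting pennies overflows just past $21.4 million:
max_cents = 2**31 - 1
dollars = max_cents / 100          # about 21.5 million dollars

# Decimal arithmetic, by contrast, is exact at arbitrary scale,
# much like COBOL's fixed-point decimal fields:
total = Decimal("99999999999999999999.99") + Decimal("0.01")
print(dollars)   # 21474836.47
print(total)     # 100000000000000000000.00
```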
Need more options
[x] don't try to thumbnail/summary content on optical drives unless asked.
[x] likewise network drives
[x] don't try to thumbnail image files which are, say, 15k x 10k pixels. Kill thumbnail process which consumes more than X ram.
[x] disable the feature where thumbnail/summary threads hang and hold files open.
[x] don't look inside zip files when searching files (unless I say so)
[x] don't even pretend you know what a zip file is.
[x] don't do stupid magic things (like deleting an html file when I delete the directory alongside it) without asking first.
[x] don't obey any drag &drop command which took less than 0.4 second to execute
[x] don't do 'undo' without asking if ok, if the action being undone was more than 20 seconds ago.
[x] don't do 'paste' without asking if ok, if the cut or copy was more than 20 seconds ago.
[x] don't create shortcut on drag/drop based on dubious criteria
[x] produce a suitably humble apology with full and proper explanation, when I try to rename something to a reserved name, like "Con.Air.avi"
[x] don't claim that filenames like ".foo" are not allowed.
one nice thing that Tcl can do....
Tcl keeps the entire interpreter state in one (non-global) variable, thus allowing multiple, independent interpreters to be run in different threads. Maybe this is why it's a good choice for this app.
I don't think this is at all possible for python, unless you do a *lot* of recoding. You can 'bless' threads to share the same interpreter with other threads, but that's not the same thing.
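As it happens, Python's standard library can demonstrate the Tcl side of this directly: `tkinter.Tcl()` creates a fresh, fully independent Tcl interpreter (no window required), and you can make as many as you like, each with its own state.

```python
import tkinter

# Two independent Tcl interpreters hosted in one Python process;
# setting a variable in one does not affect the other.
a = tkinter.Tcl()
b = tkinter.Tcl()
a.eval("set x 1")
b.eval("set x 99")
print(a.eval("set x"), b.eval("set x"))  # 1 99
```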
thanks for that answer ...
I think I know what you mean, in that since I've done a lot of python coding, I get very very frustrated with C++. Not because it isn't python -- that would be silly -- but because from the perspective of using python I now clearly see all the ways C++ falls short of what it's trying to do with templates and so forth.
I don't really know perl, but I think python has the goods for extensibility and introspection. You can, for instance, fairly easily generate functions in the form of strings -- ordinary source code -- and then compile and call them... not quite as transparently as in Tcl (or lisp, for that matter), but not at all hard. This is a really good speedup trick in some cases, though it's difficult to imagine a case where it's the only reasonable option. In Tcl, of course, executing a string is business as usual.
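A minimal sketch of that generate-compile-call trick (the `poly` example and names are made up for illustration):

```python
# Generate a function as ordinary source code, compile it, then call it.
coeffs = [3, 0, 2]  # represents 3 + 0*x + 2*x**2
body = " + ".join(f"{c}*x**{i}" for i, c in enumerate(coeffs))
src = f"def poly(x):\n    return {body}"

namespace = {}
exec(compile(src, "<generated>", "exec"), namespace)
poly = namespace["poly"]
print(poly(5))  # 3 + 0 + 2*25 = 53
```

The win comes when the generated code bakes in constants (like the coefficients here) that would otherwise be looked up on every call.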
vicm and colinmcc - yes but
What I meant is that, 'how it works' appears to a novice like you are tricking the language, e.g. by reprocessing a string in some non-obvious way to get a certain result. That's how it works, but it means that some ordinary, everyday Tcl things look like strange tricks to the learner. In other languages you don't have to get strange simply in order to do fairly ordinary things (which is not the same as saying you *can't* get strange if you want to). Only stated as my opinion.
And I feel that pretty much everything you say about Tcl -- run-time morphing, do-anything powerful -- can also be said about coding directly in assembler. Just replace the word 'string' in any overall description of Tcl with 'byte, or group of bytes' and you could easily end up with a description of assembler (try this: absolutely everything is a "string" and the programmer controls exactly how each string is treated by the program). Except of course that Tcl has garbage collection and much better behaviour when things go wrong, and is not tied to a specific processor. So it's definitely better than assembler, but in terms of general usability it's a good way off in that direction IMHO.
Question for Tcl fans... why?
Do people who choose to write complex apps in Tcl have no idea what other roughly comparable languages (e.g. perl, python) are like? Honestly trying to not make this a troll.
Not to take away from all it can do, but it's so strange to use. In my limited use, when I write any Tcl at all (other than simple unconditional commands) I find I'm often trying to trick the language into doing what I want because it seems the 'normal' way doesn't ever work (or I find out after, that the trick which worked is in fact the normal way for Tcl). I can certainly see how, if you've put in the steep learning curve to be good at Tcl, you'd then want to put that to use. But when you choose it mainly based on that investment bias, it doesn't mean you get to call it a good language. Am I being unfair?
who said Google invented multi-language browser scripting?
The difference is that it's running in a NaCl box as opposed to a DirectX plugin, so it's both platform independent and inherently trustable (according to how much you trust the NaCl sandbox, anyhow - but no need to put extra trust in the Tcl interpreter or the code it's running).
as compared to the massive waste of silicon...
... implicit in being legacy compatible with pentium, 386, 286, 8086, 8080? At least the GPU will be powered down.
what point are you making?
(a) I would not be at all surprised to find that facebook is being used somewhere as literally a textbook example of an application that parallelizes well. What % of facebook users do you ever see any information from on your page?
(b) regarding I/O bottleneck - the trick is to distribute I/O - network and disk - among the processors. This has been thought of.
they don't have the honesty to say it but
...based on your answers you did not share their "core values". I.e. you were not sufficiently Xtian for them. Good for you.
and recycle hardware.
All those 4-5 yr old XP machines which won't upgrade to Win7? Most will run ubuntu fine. And they have decent processors, USB2 and DVI out and all that (probably SATA even) so you'll be fine for schools.
Yes, good news, bad news.
Good news: Lots of nice small, cool, RISC-powered netbooky things will be built (which we'll be able to run Linux on, without blubbering about software compatibility). Still an MS tax but not the intel tax (some of which goes to thermal widgetry). Bad news: Microsoft Windows for Microwave Ovens(tm).
Remember that computer that had a Z80, 6502 and 6809 in it?
No, neither does anyone else. But there was one... Yes this is possible. Doesn't make it a good idea. It's bad enough having to support what are essentially three different processors in the intel CPU (if you support x64). It's technically far better to find ways to make software more platform-independent, there's a lot of interesting work being done in that area. Infinite backwards binary compatibility has been a cornerstone of the Wintel approach but has seriously messed up the platform.
Incidentally, a few years ago I learned from an intel presentation that they had put a 1 GHz RISC at the center of a four-core Pentium die in order to do thermal management (i.e. mess with the clock speed and voltage to keep the Pentiums on the happy side of meltdown). They didn't say what it was, but I hear that intel is an ARM licensee...
NT on Alpha/PPC
But was NT on Alpha/PPC intended to go anywhere? or was it just to keep intel in line (and prevent them from enjoying MS monopoly by proxy)? As soon as AMD had a viable alternative, MS no longer needed the threat of retargeting their OS.
Remember about the same time MS came out with a 'posix subsystem for NT' which was buggy, incomplete, basically useless, and obviously just a ploy to get a 'posix compliant' checkmark so they could bid NT servers on certain contracts.
Likewise this ARM port could be a way to put pressure on intel/AMD to build much better low-power x86 chips, but I don't think so. If they coulda, they woulda.
Hey, ARM-based desktops will be great for smartphone developers...
for 3 millisec
From the info above, it takes 3 msec to reach mach 5 at 60K G's. Give it enough mass (relative to the projectile) and a suspension system, and it shouldn't be too hard to keep it sufficiently still during that time. My guess is that dealing with the reaction from the acceleration force is a hugely greater problem than any motion of the platform.
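A back-of-envelope check of those numbers (assuming Mach 1 is roughly 343 m/s at sea level):

```python
# Time to reach Mach 5 at a constant 60,000 g, and the implied
# acceleration distance. Figures are rough, sea-level speed of sound assumed.
g = 9.81
v = 5 * 343.0               # Mach 5, ~1715 m/s
a = 60_000 * g              # ~588,600 m/s^2
t = v / a                   # time to reach Mach 5
s = 0.5 * a * t**2          # distance covered while accelerating
print(round(t * 1000, 2), "ms,", round(s, 2), "m")  # ~2.91 ms over ~2.5 m
```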
right, but does it actually work?
Ok, but if you do this, can you determine the amount of free space on the drive? Can you defrag it? Will applications give you wrong information about the space available? The 'normal' Win32 API for getting free space, AFAIK, specifies the drive, not the directory. I think this capability was available decades ago in DOS, but it was done basically as a last resort, and it did make it impossible to find out basic information about the mounted drive.

This is the point: the POSIX file system semantics (hard links, sym links, etc.) and the two-level view (block device vs. file system) have been nailed down for decades, and for decades all of the utilities (especially tar, du, cp, find) have known about them all and are able to deal with them properly. By contrast, Microsoft 'threw in' things like drive mounting and 'subst' and failed to support them properly with APIs and utilities.

Example: NTFS added 'access time' to the file system, and Windows made it useless, since the only way to read it via the GUI -- the 'property' display -- causes the access time to be set to 'now'. Fail.

Example: NTFS allows hard links, and files named 'Aa' and 'aA' in the same directory (via the ill-conceived 'posix' subsystem API), but if you do this you've effectively corrupted the FS, since basic file utilities will be baffled by it. (I have often, without intending to, created files in NTFS volumes using cygwin which cannot be copied or deleted except using cygwin. I'm not even sure how.)

System features which aren't properly supported will get little use, and if little used, real support won't arrive. Not useful features, then.
What if they have a 'light OS' on the desktop and the data and app are on a company server(s) rather than a cloud? Security problem fixed. Backup sorted, too. Roof leaked all over your 'computer'? Here's another one, get back to work. Do you have any idea how many of those 'indispensable' Windows desktops are basically just running a browser (plus antivirus of course)? Not all of them, by any means. But a lot. 60% might not be too high.
20 year old code in linux vs 20 year old MS code
20 year old code in linux (or older) is, by and large, still there because it was done properly 20 years ago. 20 year old Microsoft code is, in many cases, still there because they mismanaged the migration/compatibility process coming up from the mud of CP/M and DOS, and can't fix it now without breaking things that are far newer than CP/M and DOS. Try creating a file called 'con.txt' or even 'Con.Air.divx.avi' on your Windows machine. The reasons why you can't do this represent a pretty big fail. And it's too late now: many apps (even brand new ones) need to know that certain names are impossible, to avoid being DoSed, so MS can't fix it without breaking all those apps. This is just one example. Also, drive letters. Drive letters? In 2010?
how to check correctness
(1) define correctness in a spec
(2) review the spec, including publishing it
(3) implement code which checks correctness according to that spec
(4) publish, review, test, and verify the code
NaCl is not your worst problem then.
If you can find a way to escalate privilege using the restricted "whitelist" instructions that are allowed by NaCl, then you've got a serious problem and the CPU is to blame. NaCl won't even load the program if it contains non-whitelist instructions.
ActiveX insecure by design
The difference is that ActiveX is insecure by design - once the control is 'trusted' -- which has nothing to do with what's in the code -- anything is possible. No NaCl implementation may be perfect, but imperfect protection is much better than no protection at all. Also, it's all open source (unlike the ActiveX trust code) so problems will get found much faster.
Native Client Instruction Filtering
Many of the x86 exploits (e.g. the F00F bug) have relied on unusual instruction encoding; the many prefix bytes allow for many combinations, only some of which are legal. The F00F bug is an illegal use of the LOCK prefix which is supposed to trap, but on certain silicon the hardware steps on its own shoelaces trying to trap it, halting the CPU. The Native Client approach is to give you a whitelist of instructions, which excludes a LOT of instructions: you can't execute any privileged instruction, you can't use an instruction that changes a segment register, you can't use an instruction with more than one prefix byte, and you can't use vector ops which are not enabled on your target (some CPUs which don't support SSE ops will ignore the prefix and execute an MMX op instead, which is potentially dangerous). Basically the rationale is that intel hasn't provided a trustworthy, fully functional illegal-instruction trap mechanism, so rather than rely on it, NaCl implements a fairly restrictive whitelist. The loader scans your code in advance to ensure there's nothing bad in there before running any of it.
This means you need a NaCl-aware compiler/linker, of course.
Another aspect of NaCl is that the hosted program cannot use the underlying native OS APIs, since that creates hazards and makes the NaCl container OS-specific. Instead there is an isolation layer and a special NaCl API for I/O.
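The load-time scanning idea can be caricatured like this -- a toy whitelist check over a made-up instruction list, not the real NaCl validator (which operates on raw x86 bytes):

```python
# Toy illustration: scan the whole program before executing anything,
# and refuse to run it if any instruction is off the whitelist.
WHITELIST = {"mov", "add", "sub", "jmp", "ret"}  # hypothetical subset

def validate(program):
    """Return True only if every instruction's opcode is whitelisted."""
    return all(line.split()[0] in WHITELIST for line in program)

safe = ["mov r0, 1", "add r0, 2", "ret"]
unsafe = ["mov r0, 1", "lock cmpxchg8b [r0]", "ret"]
print(validate(safe), validate(unsafe))  # True False
```

The key property is the same as NaCl's: rejection happens statically, before a single instruction of the untrusted program runs.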
Hardware virtualization is something which is essential in the x86 world, specifically for Microsoft (since there you are trying to install an OS from a DVD, not one that you built yourself), and is sure handy for installing linux from a standard coaster too. But you shouldn't need it on an ARM server. Without hardware virtualization support, the OS can 'virtualize' storage over the network in several ways (ATAoE in addition to normal network shares like nfs) and can also 'virtualize' whatever display/console technology you need over the network (for a server, that might as well be just sshd, but could be vncserver too). This doesn't support running more than one virtual machine per CPU, but who needs to do that when the CPU is so small and cheap? If you want a more 'physical' virtual disk, you could make a chip that connects to the PCIe and emulates a SATA controller, tunnelling the traffic over some fabric to the actual disk (like that weird 256xatom box that was announced a month or two ago). The whole Wintel development history has been a story of new hardware conforming to existing software (e.g. every modern display adapter can emulate all the VGA/EGA/CGA cruft from the 80's) -- and to some extent we've all been trained to expect that and see it as normal -- but the way it's done elsewhere is you allow for a bit of system coding so the software can adapt to new hardware. It will result in a more efficient system, compared to hardware virtualization.
double legacy bomb
What about all the sites that check the browser agent ID and say "oh, IE - take all *this* deranged JS/DOM instead of the standard stuff"? (The smart ones test by running some code and seeing if it raises an error; they should be OK.) But are they going to change the agent ID to make it not look like another IE? Or continue to support all the non-standard 'IE classic' JS/DOM as well as the standard stuff? Urgg....
CSM Usually very neutral, I have also found
But I ran into this recently:
(1) http://www.csmonitor.com/USA/Politics/The-Vote/2010/0830/Glenn-Beck-rally-attendance-calculating-how-many-really-showed-up --- neutral, and reports results of an actual aerial survey, and not just the stupid wild-ass guesses.
(2) http://www.csmonitor.com/USA/Election-2010/From-the-Wires/2010/0924/Rally-to-Restore-Sanity-Bigger-than-Glenn-Beck-s-rally --- later article pretends the first one didn't exist; doesn't mention the actual survey, gives usual nonsense 500K and 300K-325K crowd count; and gives a reference for the latter, which turns out to be a tweet of 3rd-hand unofficial SWAG made on the ground, as confirmed (and disclaimed) 13 min later by the second-hand source: http://twitter.com/DomenicoNBC/status/22364380399
But yes, on the whole, far more balanced and neutral than Fox. But, you know, so is al-Jazeera.
Now it works, on friday it didn't.
(the second link)
My comment is dated 13th presumably because that's when it was approved.
"can be read free here": no, it can't.
For flat surfaces: apply tissue over a strong, open wooden frame, and dope it. When that's dry and flat, place the model on it, after applying glue to the model struts where they contact it. When that's dry, cut the tissue away from the frame and neaten up the edges. There should be little or no excess tension in the covering that way (I'm guessing the root problem is that the tissue deforms the model frame as it shrinks, since the doped tissue is hardly elastic at all once it's shrunk). This method won't be so good if the frame doesn't present a really good planar gluing surface.
For simple curves like the top of wing you may be able to cut the shrunk tissue from the frame, and drape it over the wing with weights to keep it taut while it glues.
And as someone already mentioned, make sure none of the struts are sealed or it may be destroyed by pressure changes...
The whole point of a RISC is you don't need 'masses of silicon' to get things done, it just needs to be the right silicon. It also helps if you are designing the hardware before the OS that will run on it is already built, and therefore don't need masses of silicon to emulate legacy behaviour in every absurd detail.
NT on Alpha/PPC
I now wonder if NT on Alpha/PPC was done solely to prevent intel from thinking they were getting MS' monopoly by proxy. As soon as AMD had a credible alternative CPU this game was no longer required. I eval'd an Alpha machine once with NT 4.0; it was weird because it had IE (and I ran pov-ray on it) but no dice getting a Netscape build for that. It also ran the 8086 code in VGA BIOS ROMs, by emulation, so that video mode switching would work. See intel? That's how you deal with old binaries: run them, but no faster than before, so they will go away sooner and not plague you for decades with layers of legacy garbage. Of course then you can't sell a CPU into a market which wants to move their entire OS and apps unchanged onto the new machine and have it go faster.
Yes, confused. So?
Yes, people have been confused about this since the days of '16-bit' (8088) and '8-bit' (8080) software, since both of those machines had an 8-bit data bus, and 8-bit and 16-bit registers (but the '86 squeaked in with a few more 16-bit arithmetic ops). And the 68K was 24A/16D on the pins.
So what's your point? A convention that was barely meaningful when started in 1982 needs to be still followed? Bear in mind you still don't need 64-bit arithmetic for much, but 64-bit addressing is of critical importance for getting more ram online, and if you have 64-bit user addresses then you need 64-bit arithmetic to calculate addresses. If you have 64 bits only in the MMU, you can run a vast range of apps sharing terabytes of PM as long as each one doesn't need more than 4G of VM. Like everyone running all those x86 apps under Win64. So, at this point in history focus has shifted from the size of the data registers to the size of the address (I could also point out that a fair number of '32-bit' processors have had 64-bit data busses, and 128-bit vector registers supporting 64-bit arithmetic).
Bottom line: people will continue to be confused about what '64 bits' means with respect to CPUs, unless it's provided with some other info, e.g. AMD64, or IA-64, or '64-bit physical address'. Do you think it would be a good thing if 'Oh, it's got a 64-bit CPU' actually told you everything you needed to know? Confusion is OK when it forces clarification.
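The 4G-of-VM-per-process arithmetic above, spelled out (the 1 TiB physical-memory figure is a hypothetical):

```python
# Each process with 32-bit virtual addresses is capped at 4 GiB of VM,
# but a 64-bit MMU can map many such processes into big physical memory.
vm_per_process = 2**32             # 4 GiB of virtual address space each
physical = 2**40                   # hypothetical 1 TiB of physical RAM
print(physical // vm_per_process)  # 256 fully-resident 4 GiB processes
```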