What's the point?
I wonder why they would force HTTPS? Just for foolish consistency? In the case of XML schemas etc. published by the W3C, there are no secrets; they are maximally open data by definition.
> In my day we had 7-bit ASCII and maybe EBCDIC
At least you had brackets and backslashes, you rich bastards! When I started, most Finnish terminals used a hacked ASCII where [\]{|} were replaced by ÄÖÅäöå. Have fun writing C in that.
if (strcmp(line, "fooÖn") == 0) ä
arrÄiÅÄjÅ ö= 1;
å
Some ADM3A terminals at the Helsinki University of Technology had a little retrofitted switch that flipped between the ASCII and Finnish character set ROMs. If you used it, the text on the screen would change instantly between brackets and letters.
If you want to get a feel of what installing Linux was like with the very earliest distros, such as SLS, installing NetBSD is a very similar experience. I have occasionally done it for nostalgic reasons, but never used it for long before leaving it on the shelf (many of my machines have some NetBSD VirtualBox VM sitting around for this reason).
One problem with NetBSD (and an itch I should scratch some time) is setting up non-English keyboards and UTF-8. You can ask for a Finnish keyboard in the installer, but it affects only the bare console and has no effect in the X11 session. That needs a separate setting (I think it involved the XDM setup files).
(Icon? The nice BSD daemon in sneakers wasn't available, so I had to use that one.)
I usually keep a classic swap partition just in case. With an 8 GB machine, it is almost never used (one can check this with "top", for example), and consequently does not cause any performance degradation.
However, there is one situation where it is useful: you have some modern piece of desktop bloatware with lots of data open, and you wish to use something else for a while, intending to get back to it. So you launch the other program, things are sluggish for a while (swapeti-swap), but then you can do your thing in the other program and return to the original bloatware. Again things are slow for a while, but you do eventually get to a state where you can work, after the necessary pages have been restored from swap.
In other words, you swap from one big task to another and back, like the name "swap" says. If I did not have a swap partition, the dreaded Linux OOM killer might have decided to kill the first program (or something else).
Of course, the idea of using swap to run two active, huge programs in an interleaved fashion does not work nowadays; you might as well be running Babbage's mechanical computer.
CP/M was originally written for the 8080 (there was no Z80 back then), and even after the Z80 became available and popular, the OS, its utilities, and all older programs written for it ran on the 8080 (and the 8085, which was practically identical from the programmer's point of view). Later, many applications started requiring the Z80 with its extensions.
Even as late as 1981, Nokia introduced a CP/M computer, the Nokia MikroMikko 1, built around the 8085, and most CP/M applications still worked on it.
Interesting. If I understood correctly, the distribution (either SpiralLinux or GeckoLinux) primarily consists of a better (or at least more user-friendly) installer, and the rest is the same as the base distribution? Not a bad idea. We don't really need more separate distributions unless they implement something really new (most don't).
Isn't OS/2 also still around, now named eComStation? Although that is based on the OS/2 Warp version, which is very different from the original 16-bit OS/2 released in the eighties.
Arguably all the various BSD versions also date from that era, being derivatives of Berkeley Unix.
I wonder if it just needs a kind of feedback loop that would stimulate it when it is not talking to a human. Or continuous inputs from the environment, like we have. Come to think of it, the AI is now in a kind of sensory deprivation tank, a state of affairs known to make humans crazy if they stay in the tank too long...
I don't know the actual details, but I don't think it just has wires going to some hydroelectric plant. Far more likely it is in the national "Fingrid" grid like everyone else here, and uses hydro just in the sense of paying hydroelectric companies in preference to other types of energy supplier. Fingrid gets its power from a mix of sources; you can see a fascinating near-real-time display of Finnish energy generation broken down by source (nuclear, hydro, etc.), consumption, exports and imports here:
https://www.fingrid.fi/en/electricity-market/power-system/
> With the need of Russian Federations raw materials to actually make the chips,
There is really no resource in Russia that cannot also be found elsewhere. And chips do not require large amounts of raw materials anyway; they are a rather extreme example of a product where all the value is created in the design and manufacturing.
> The requirement for faster hardware and more RAM comes from the inefficiency of the code that's written.
Amen, brother. My most hated bloatware is a local application that is essentially a search-and-replace that can handle a bunch of files, including ones inside zip, gz etc. archives. Somehow the implementers managed to make it require both Java and Python scripts, and to deliver it in a container that naturally contains a particular version of both runtimes. And of course it is slow and pretty hard to use. It could probably be reimplemented as a fairly straightforward Perl script. But it is not in my department.
But it must be said most problems do become far easier if you can assume at least a 32-bit address space, so even though I like efficient, non-bloated solutions, I do not pine for the 16-bit days. Been there, good riddance.
Data meant to be accessed by some unknown people (or other beings) in the future surely should not be encrypted, and it should be encoded as straightforwardly as possible.
As for explaining the coding etc., I think such really-long-term storage must be accompanied by material that bootstraps the deciphering from the basics: first explaining binary coding at an elementary level (01 = o, 10 = oo, 11 = ooo), then ASCII: A = 0100 0001, B = 0100 0010, ...
Of course there is the risk that whoever finds your carefully prepared optical disks will use them for jewellery... But that is less likely if they are on the Moon, because stone-age level people will not get there.
Linux (as a desktop or server) is for people who like to have a choice, not some BigCorp telling them what the next OS version will be like. Don't like the direction Fedora is going? Use some other distro. Actually, even this is not usually necessary, because you can customise Fedora to your liking. I mostly use Fedora but hate the Gnome desktop, so I use XFCE (conveniently, there is a ready-made Fedora "spin" with it, but even if there were not, I could install XFCE myself).
Contrast this with the moaning one hears from Windows users whenever MS makes some change existing users do not like. Sometimes the moaning causes MS to change its mind, but usually not, and you just have to swallow the new version. (And there is no alternative Windows "spin".)
As to the huge number of distros, most of them do not matter to a beginner. They tend to be derivatives of the three or four big ones (usually born of someone's dislike of how the original distro evolved, or a pet customisation of it). I would tell a beginner to pick a well-known one with a good user community, learn to use it, and then, after maybe a year, perhaps sample others if interested in alternatives.
I have used it largely for handling NTFS-formatted removable media, which I often find has to be readable on Windows as well. Also for accessing a Windows partition years ago when I dual-booted (no longer, as VMs are a more flexible solution, so I now give the whole machine to Linux). Never had problems with these tasks, but I agree more extensive usage might expose problems.
Not being blessed by MS is not so relevant. The popular Samba file server software isn't either; it was created by a combination of reverse engineering and reading incomplete MS documentation. MS is not in the business of helping open source interoperate, and I don't expect them to ever help any NTFS project, at least not on terms that would be acceptable to the Linux kernel developers.
Sounds like the author has not heard of NTFS-3G, which has worked pretty problem-free for years. Mounting, reading and writing NTFS work just fine, and it comes with all major distros. OK, it is a FUSE-based system, but that is transparent to users, and the performance is quite OK unless you for some reason want to use it as your main FS (and why would one want to do that on a Linux system?).
I also wonder if they will run out of cellular phone numbers if the disposable tags connect via cellular networks and there are millions of tags. Of course you can reuse the number once a tag is disposed of or otherwise rendered inactive, but you don't know when that happens unless the end-of-life is managed somehow. The deposit you propose would be one good solution.
Just means some day Fedora will not work on old machines. But I'm pretty sure there will be other Linux distros that will. Probably even a Fedora fork or respin will, as some Fedora users will not give up BIOS and X.org until you pry them from their cold, dead hands (I myself might be in that group; X.org still has the advantage over Wayland that it works with all desktop environments, not only the most bloated ones).
Yes, the content has been going downhill. However, one cause Netflix cannot really do anything about is studios setting up their own streaming services and taking their content off Netflix. So no Disney properties on Netflix, which sadly now means much more than Mickey Mouse (the Marvel, Star Wars and Pixar franchises, for instance).
To see all I would like to stream, I would have to subscribe to HBO, Disney+, Amazon and Apple in addition to Netflix. The monthly bill would start getting serious.
Article: "particularly for developers in environments that have standardized on RHEL but prefer Windows for their code-wrangling tools."
A more likely use case is developers in corporations where everyone is forced to work on a Windows desktop despite developing for Linux-based environments. Linux does not lack development tools, and it is more efficient to work on the same or a very similar system as your target. No "impedance matching" problems with file handling, for example.
That is also my take on what will probably happen. Open source is effectively sufficiently Russian if it can be supported locally. The point is to ensure you don't become dependent on a foreign company and its servers for updates (and there is no practical way to prevent Russia from obtaining updates to open source), and to ensure the source can be audited.
Not completely true: there are some semiconductor fabricators in Europe, although sadly very few compared to China, the USA, or Japan. See https://en.wikipedia.org/wiki/List_of_semiconductor_fabrication_plants
A major lithography equipment maker, ASML, is Dutch (www.asml.com). It stuck in my mind because a fund I invest in holds shares in it.
C is low-level in the sense that you can twiddle bits without resorting to libraries or non-standard features, and with good efficiency. And it does not require a complex run-time system to work. None of which is true of languages like Lisp, Java or JavaScript.
There aren't many C standards, just successive versions of the one standard (C89, C11, ...), as with any other standardised language. C leaves some things implementation-defined, which is in line with its low-level nature.
It is "lowest common denominator", because it is implemented for almost any CPU architecture anyone cares for. Not true of any other langauge (Assembler does not count, because it is by its nature totally different for each architecture).
ASN.1 is not a good comparison, because it is explicitly designed for defining data for interchange between systems, and for nothing else. You cannot program in ASN.1. You also cannot use it to define an arbitrary data structure at bit level, because data defined in ASN.1 is BER- or DER-encoded in the implementation, which uses particular rules and metadata to ensure the receiving end can reconstruct the high-level data. By contrast, in standard C you can use the fixed-size data types from the stdint.h header to lay out your struct very precisely and portably. Really the only things you cannot define are endianness and padding, and the latter can be avoided by ordering fields of different sizes suitably.
JavaScript is not Lisp by any stretch. It lacks the key innovation in Lisp: using the same simple and elegant representation for both data and code.
(Hmm, looks like I disagree with almost all your statements).
So they are complaining that Rust or Swift has to adapt to C calling conventions to interface with the OS? That is actually far preferable to the alternatives, simply because C is sufficiently low-level!
If the OS interface were defined in a more high-level language, it would be even worse for other high-level languages than the one true language preferred by the OS writers, because of the added complexities related to the language's preferred calling sequence and memory management. In fact, implementers would likely find the OS interface a strait-jacket even for further development of the preferred language itself!
There is precedent. Have you seen any "Lisp machines" lately? They had an OS and CPU tuned for Lisp, which made using something else pretty difficult.
And Lisp isn't that fashionable these days. By contrast, "C machines" are going strong.
> "man true" says there's a --help and a --version option,
It definitely needs an added --false option, to invert the returned status code. Then we would not need the false command, and it would allow writing crystal-clear shell code like
while ! true --false; do
....
done
Thanks for the heads-up, I'll go see where it is now. I used to use NetBeans happily for JS and later C++, but then the open-sourcing seriously degraded the C++ support (probably it included something that could not be relicensed suitably). NetBeans has one feature, essential for me, that VS Code and many others lack: you can have real multiple windows viewing the same or different files. I just cannot use an editor without this.
I am sure. The 16k and 64k versions were released at the same time. I bought my 64k version (which, as noted, actually had only 48k available to the user) from a Finnish importer in 1983. The Oric Atmos was released a year later, and I was pretty pissed off because it was clearly superior: it had a proper keyboard and a bug-fixed ROM. What the Oric-1 should have been originally.
Upvote for the Oric-1 mention, it always seems to get forgotten. My first computer, because it seemed a great deal: 64 KB of RAM (of course it wasn't mentioned that the top 16k is normally hidden under the ROM) and a non-chiclet keyboard (though the keys were actually similar to those on calculators, not great for typing). It also introduced the joys of working around unfixable bugs in the built-in BASIC. A great learning experience!
> An attack on them would thus not necessarily invoke a NATO response unless specifically requested by either nation.
As things currently stand, there would be no NATO response, period. That is why the idea of an actual NATO membership has recently gained popularity at amazing speed in both countries.
From a legal standpoint all this is completely irrelevant unless Patterson et al. own enforceable patents on the RISC-V technology (I don't know if that is the case, but I doubt it). Instruction sets like RISC-V are old hat now, largely thanks to Patterson's work. He wrote a seminal textbook on the subject, which I read decades ago and almost understood.
The older predictions do match pretty well:
https://climate.nasa.gov/news/2943/study-confirms-climate-models-are-getting-future-warming-projections-right/
quote: "The team compared 17 increasingly sophisticated model projections of global average temperature developed between 1970 and 2007, including some originally developed by NASA, with actual changes in global temperature observed through the end of 2017. The observational temperature data came from multiple sources, including NASA’s Goddard Institute for Space Studies Surface Temperature Analysis (GISTEMP) time series, an estimate of global surface temperature change.
The results: 10 of the model projections closely matched observations. Moreover, after accounting for differences between modeled and actual changes in atmospheric carbon dioxide and other factors that drive climate, the number increased to 14. The authors found no evidence that the climate models evaluated either systematically overestimated or underestimated warming over the period of their projections."
Sorry, no relief there. I would not bet against modern climate models.
That was just one paper, true, picked with Google. But the IPCC Sixth Assessment Report from last year is about as grim for the scenario with the largest emissions they considered, SSP5-8.5, which I guess represents the "let's burn all the fossils" case. These reports can be found at https://www.ipcc.ch/
The IPCC is quite conservative in its reports, being under heavy political pressure from various governments to tone down its warnings.