Re: None of these people have seen "Eliza"
there are a number of Elizas on line
No need for an Internet connection, even. Emacs still comes with the "doctor mode" (M-x doctor).
[Ducks and runs from vi zealots...] --->
1196 publicly visible posts • joined 19 Dec 2012
People learn incrementally: building their knowledge on what the[y] already knew. [...] So it should be with training AIs.
But that's the whole point, right? Knowledge is not a characteristic of "generative AIs", as far as I understand, nor is it the basis of their operation, so they cannot be trained that way.
I imagine next time I shop for a dishwasher it will be impossible to buy a dumb unsociable one. Does anyone know if those things are smart enough to randomize their MAC addresses or something (damn, I am giving them ideas!) or will it be possible, e.g., to tell the WiFi router to give it a fixed IP address (so that it doesn't get one from anywhere else) and then block all incoming and outgoing traffic from it? Will it refuse to wash the dishes then?
Asking for a technically-minded friend, of course. I myself don't mind Baidu, Yandex, and Google knowing when I wash the dishes, what was for dinner from built-in forensic analysis of residue on plates and pans (information that will be passed to insurance companies if it is determined that my dinner was unhealthy more than 4 days a month), or when to call me to offer "extended warranty". Of course not - I am not that paranoid.
I can't emphasize the "naively" bit enough, but a pub/sub system with a web (or not web?) interface that airports, meteo services, FAA, and other authorized parties would feed and pilots would be able to access even from their phones before or even during the flight seems to fit the requirements. Linking it to the (planned?) flight path seems doable, too. By now humanity has learnt how to make these things quite performant, reliable, and secure. Scalability is probably not a big factor (the article mentions 11K flights grounded when it all went down - that's 11K consumers, right?). I suppose the threat of DDoS can be addressed, too. What am I missing?
I certainly may be missing something as I have no experience in the intricacies of NOTAM requirements. If anyone can post enlightening comments I, for one, will be very grateful.
I asked myself the same question half way through the article, but as their alternative solution is a managed hosting service, just not on AWS, at least some of that is there, too. Would be nice to see a more detailed comparison. Any chance of seeing the cloud vs. local hosting provider margins? Yes, yes, too much to ask, got it.
Apt, like every other conventional package manager, has no "undo" function.
I'll raise you "dnf history rollback" and "dnf history undo".
I am adamant that dnf is as conventional as they come. No, it probably does not target the intended audience of Endless OS, but neither does apt.
Let's consider a 120hp car. In the past, 120 horses looked quite powerful, but for today's cars - rather large and heavy, for convenience and, above all, safety - it doesn't seem all that excessive. 120 horses equal 90kW. 840W of a computer is less than 1% of that - can we ignore it?
There are bigger problems to consider. Look at the UK. According to the government [PDF] total electricity demand - that's everything: industry, transport, military, offices, households, and data centres, too - in 2021 was 334TWh. Supply was actually a bit lower - the UK is a net importer of electricity. Total generating capacity is 76.6 GW. That's fossil, renewables, nuclear - everything.
Now, consider a 120hp=90kW car (autonomous or not) driven for 1.5-2 hours per day (daily commute, school run, supermarket, pub, visit friends/family). It won't use its max power all the time, so reduce it to ~1h at full power, or 100kWh/day as a nice round fiducial number. If you want another number feel free to scale what follows - what's a few kWh between friends, eh? Now, the government says there were more than 40M registered vehicles in the UK in 2021 (same year as the electricity figures). Let's say we want a relatively modest, but significant, step of making 25% of those electric. That's 10M, consuming 1TWh/day or 365TWh/year - more energy than the UK consumes or produces in total today.
Now, let's say my EV consumed 100kWh on Tuesday. According to the wet dream of EV enthusiasts I will be able to replenish that by plugging the car into the grid during the night between Tuesday and Wednesday, and there will be no problem either finding a station or spreading the charge over hours. Forget that I want it to happen in the 3-5 minutes I spend filling up the tank today - I am completely on board. If I have 5 hours to charge the car I will be drawing 20kW from the grid. 10M cars like mine will draw 200GW during the night - almost 3 times today's grid capacity. That's assuming 100% efficiency, etc.
It looks to me that before we can plug a significant fraction of our vehicles into the grid we'll need to increase the grid capacity at least a few times. It sounds like a really big project for which we'll need to pay through the nose, and frankly I don't see how this can be accomplished by 2030 or whatever "the deadline" is.
On the plus side, I am not terribly worried about the carbon footprint of data centres in this context.
[I]f the answer is yes and, indeed, if the answer is no it is most likely not ChatGPT.
I played with it a little bit, and one thing I could never manage was to make it give a simple, straightforward answer. Certainly it never gave me a yes/no answer to a yes/no question. Of course, this is a direct consequence of how it works: the basic algorithm is something like "what is the most probable next token (where token = a word, a phrase, or punctuation) that would follow this text fragment (prompt) based on the statistics of the corpus of text that is your training set?" I find it rather amazing that the algorithm that can be stated in a single simple sentence would give such good results just because you get good statistics from a large enough sample. But once it comes up with a few sentences it never reduces the answer to something simple like yes or no or maybe.
Rather, ChatGPT sounds to me like a rather bad grammar school teacher who repeats memorized texts no matter what the question is. If you pardon what sounds like a pun but is actually quite serious, it sounds robotic.
So you can ask it if it wrote something, but then it will be up to you to decipher the answer that is likely to cover every possibility.
If the "bad guys" in question are Chinese then I suspect even prohibition on punctuation marks wouldn't inconvenience them too much. Prohibition on letters of alphabet or digits - even less so.
I don't know Chinese, so any more authoritative comment on the sentence above will be appreciated. My point is, rather, that the comment I am replying to may be even more to the point than originally intended.
One can probably figure out how many of those 400M+ accounts are bots. If the result is noticeably above 5% then can he demand $44B back based on a) misrepresentation, b) cybersecurity incompetence?
The one with a folded piece of paper saying "393.5M bots" in the pocket, please... --->
I looked it up, grammar is also an intransitive verb
Citation needed. I got curious, but could not find a verb entry for grammar in Oxford, Cambridge, or Merriam-Webster dictionaries.
This is not to say you are wrong - I am genuinely interested. I don't recall ever seeing or hearing "I learnt how to grammar at grammar school" or "He always comments his code but he can't grammar to save his life". If these are valid grammatical structures I'll look forward to wiggling them into a conversation one day.
If grammar can be used as a verb the "intransitive" part is also counterintuitive to me. If you can spell a word you should be able to grammar a sentence.
Wait, was it a Turing test? And did I fail?
"Henry VIII", "Hammurabi", and "Paleozoic" pages were created by historians and geologists (maybe with paleontologists), not AI experts who would be well advised to create more pages about the history of their own field. John McCarthy and SAIL are positively Paleozoic, maybe specifically Cambrian (in the explosive sense) in the context. Try googling SAIL and count the organizations that you find before you get to Stanford AI Lab. I was shocked - and I knew what I was looking for.
FWIW, the phrase "Silicon Valley thinks that history starts with Google's IPO" (or something pretty close) was coined by an eminent (Stanford) historian. I'll leave googling it up to you.
It's not that Google has amnesia. It's just that history starts from its IPO, so there is nothing older than that to remember in the first place.
SLAC = Stanford Linear Accelerator Center. Maybe Encyclopedia Britannica can be a good start? In any case, with SLAC you are in the right place with slac.stanford.edu.
SAIL - Here the Internet may be even more confusing. There are so many places whose founders liked the SAIL acronym that you get lost Googling. Stanford Artificial Intelligence Laboratory goes back to the times of John McCarthy. A seemingly separate page provides a <cough>highly personal</cough> overview of history that, among other things, claims that just about every bit of technology one has ever heard of (including, khem, Google) originated from there. Some of it is very likely true, and at the very least it is entertaining.
Damn, I am old.
"AI code assistants, like Github Copilot, have emerged as programming tools with the potential to lower the barrier of entry for programming..."
Ouch!
I've skimmed the paper, mostly in search of a possible bias that would make the experimental group (that was allowed to use AI) less experienced (or otherwise handicapped) compared to the control group. To the authors' credit, not only did they assign participants randomly to the control and experimental groups, but they also provided handy comparison tables of the two demographics. At first blush, I don't see an obvious bias. Actually, the average experience of the experimental group is somewhat higher than that of the control group. And both groups were allowed to browse the Internet, so StackOverflow and friends were available to both groups.
OMG! Not only does AI exhibit Dunning-Kruger traits but it also amplifies them in humans, eh?
It is obvious that Twitter's future CEO cannot possibly be one of those. You didn't consult HR, did you? Just how privileged are all these heterosexual white males (mostly - see Holmes, Elizabeth)? This will certainly cause a twitstorm if the procedure is disclosed in Twitter Files!
No, no, the perfect candidate is Sam Brinton. Gender-fluid, US administration experience, an expert on nuclear waste disposal (what is Twitter if not nuclear waste or something equally toxic and dangerous?), and a proven record of no morals or scruples even where picking up other people's luggage at airports is concerned. Likely explainable by lifelong lack of privilege, of course. And - how could I forget! - foolishness!
Ouch. Just noticed[*] that the above link is from NY Post. Here is a version that can even be tweeted.
[*] You don't really think that I needed to look the name up and NY Post was the first on Google, do you?
Nice article, I like the angle a lot, but I am still struggling a bit to wrap my head around the math...
It looks to me that the author assumes a steady state where each IoS article produces $0.03/day of revenue for a year and the business started a long time ago, so that there is already a body of revenue-generating content at t=0. That's how one gets $0.03/day * (365 days) * (20 articles/day/writer) * (365 days) * (100 writers) ≈ $8M. I am kinda dubious that a typical IoS article will really generate click-throughs for a whole year, and the "you only need to run this business for a year" bit is not quite aligned with the implicit steady-state assumption.
It seems to me more realistic to measure click-through revenue of IoS in weeks rather than years. If each article generates revenue, on average, for X weeks and the business runs for 1 year (this is a lot closer to the steady state assumption) then you'll have a grand total of ~$150K*X of revenue after a little bit more than a year. At $10/day/writer you'll break even if your IoS output remains relevant in searches for something like 2.5 weeks. Does anyone have any idea what is realistic for IoS "relevance duration"?
I don't quite see where the style document prohibits RAII, but I only gave it a quick look. Mentioning RAII is very much on topic though.
In my rather long career of writing C and C++ code I never found safe usage of memory difficult. All you need is a bit of discipline. In C++ RAII is an important technique to promote such discipline. C/C++ don't prevent you from being careless, but this does not mean you should be. It's not difficult to be careful at all, though it does require a bit of effort to think of ownership and the full lifecycle of your objects. That, incidentally, is crucial for lots of other reasons besides memory safety.
All the talk of the advantages of "memory-safe languages" is basically "elf'n'safety" cushioning, a.k.a. not trusting engineers to be adults. There may be reasons for that (and yes, I've seen rather spectacular examples), but it does not address the root cause. I have no hard data, but I strongly suspect that many people who blame C++ for the fact that they never bothered to learn to use it effectively won't be effective, and will write dangerous code, in other languages as well. On occasion they will write so much code to work around language restrictions to do something fairly simple that the result will be dangerous just because of that. Some other people can use C++ effectively themselves, but blame it for their lack of trust in others whom they refuse to treat as adults who can be educated.
Some parts of Google's style guide certainly smell like not trusting programmers. While I didn't find a prohibition on RAII, the prohibition on forward declarations is a case in point. I would rather encourage Google programmers to use forward declarations as much as possible, to loosen compile time dependencies. I imagine Google may have a few large projects where this will be particularly important. The main justification of the prohibition is based on a contrived example involving ambiguous code. Well, C++ allows you to write ambiguous code almost as a matter of philosophy, but it doesn't mean you should. I looked for a "don't write ambiguous code" style guideline, but didn't find it. The last argument in that section looks downright weird as well. If Google engineers write code like that they should think of ways to address the real issue rather than blame the tools.
Disclaimer: the above must not be construed as implied criticism of Rust.
[O]nly the specialised plugs could be inserted. That defeated the cleaners and sundry hardware users.
You were blessed with unusually tame cleaners, I'd say. Not even unplugging critical equipment before discovering a Hoover/Dyson couldn't be plugged in?
Yes, I did notice the "hard to unplug" bit. Still, that's only an obstacle for the unusually tame, isn't it?
Bankman-Fried’s public donations went largely to Democrats. (bold emphasis mine - T.F.M.R.)
My parsing of the Grauniad's piece is: he "donated about the same amount to both parties,"[*] but took care to hide donations to GOP "because reporters freak the fuck out if you donate to Republicans"[*]. The quoted sentence is certainly in the article, but refers to the public part of the donations only.
[*] This is a direct quote - don't kill the messenger.
So The Reg counts 3 years between 0.5 and 0.8 and Redox is not quite there yet. If memory serves, Linus Torvalds started his "student project" in 1991. By 1994 Linux was my major platform in academia. It was not just usable - it had the same UI as the most advanced Sun workstations of the time (and was vastly superior to Windows of the period that I tried and abandoned in disgust) and ran on cheaper hardware (no VMs on COTS HW at that time). You had your personal resource rather than an X Terminal to share a departmental Sun. You could compile the kernel and everything else on it, too - something Redox can't do yet, according to the article.
By 1995 it was difficult to find anything but Linux at 2 US universities I was then affiliated with. In 1996 - and ever since - it was ubiquitous in at least some industries.
Yes, I suppose the distance to cover was shorter then and the requirements today are different. E.g., the Internet was simpler. But maybe the overall architecture of GUI on top of Open Look on top of X on top of kernel, with shells and tools thrown in, helped make it all usable fast? I am pretty sure that keeping those old system calls (I've given the "Why Redox?" page linked in the article a superficial glance) that allowed other stuff to run helped a lot. And therefore I am curious: does Redox have a version of libc (and maybe sockets and some other stuff) that sits atop its new syscalls? That has been done, too, e.g., for InfiniBand to bypass the kernel without re-writing applications. Would such a "distraction" help running "legacy" (FOSS) stuff for usability's sake without hindering the interesting OS R&D?
In any case, somehow (less than) 3 years from inception to being perfectly adequate for everything I needed for my thesis, later academic research, and still later industry work (none in computer science, though computer-heavy) seems to me a bit more impressive - in terms of pace of development only, mind you - than what this article presents about Redox.
- Using AI as "an ideal way to lose friends"... Hmm...
- There is a world championship (and, presumably, structured, regulated tournaments) in this thing? Olympics next?
- How long till this AI takes over real diplomacy? Layoffs in many of the world's Foreign Offices soon? Laid off Meta engineers hired instead?
- 12.5% (1 in 8) success rate on average, with 25% success being a seemingly exceptional result? Is it better, worse, or on par compared with a) normal people going through everyday lives, b) top diplomatic professionals on the world stage?
OK, OK, I am going, but I expect the rest of you not to block my access to this coat that I own according to all applicable international laws...
FWIW, my Samsung does not object to me deleting the preinstalled FB and quite a few other apps I don't use. I noticed that after a system upgrade FB sneakily appeared again, just to be deleted - again. Otherwise, no problem at all.
The contact-tracing Google Play service[*], however (the Google-Apple one, I believe, supposedly privacy-protecting - hah!), cannot be deleted. It can only be disabled, and only if you enable "Developer Options", and it starts and must be disabled again after every restart. And I am not even in Massachusetts...
[*] IIRC it's called ExposureMatchingService or some such - can't be a..sed to reboot the phone to check. How sneaky of Google to hide it behind both GooglePlay and Developer Options! To be fair, I think the service cannot do much by itself without a health authority-approved application. It sounds to me like MA's DHP sneaked one of the latter onto people's phones without permission (with Google's help?) and that is what this lawsuit is about. But I may well be wrong.
Is the famously "big picture" investor optimistic regarding China's designs for Taiwan? "Optimistic" from where I stand, that is - Xi is watching Putin's failures, talking to POTUS and others in Indonesia, getting less than keen to stir things up against Taiwan - good for TSMC prospects, at least for a while?
@Ozan: "I miss the columnists. It's not just same with only BOFH around."
Not just that. Am I the only one who connects El Reg's shift to Colonial spelling with a distinct lack of real action on the part of the BOFH and the PFY recently? As opposed to threats and innuendos, effective though they may be?
If you have an internal resource valuable enough to warrant MFA then allow access to it only via the company's VPN in the first place.
Your employees need access to work, right? So connect to the VPN first. There will be no MFA bombing then unless the criminal is inside the VPN already, in which case you'd have a bigger problem.
it's oxymoronic to be permanently in a crisis
It used to be. The word "crisis" itself has been drastically devalued, just like - or possibly more than - sterling. It is really difficult to see, e.g., a news item without "crisis" being asserted. Closer inspection reveals that the new meaning of the word is "please give us more money for whatever it is that we think is important". The most frequent variant is, "Dear Government, please give us more money...", etc.
Once you realize that language is evolving (as described above), permanent crisis ceases to be an oxymoron.
It certainly has the look and feel of corporate BS, but it may actually be quite sensible when you do parse it. My understanding is, "let's overprovision data centres today, presumably in specs and numbers alike, in the hope that we won't need more CAPEX investment for replacements for longer".
I assume Meta cannot easily do what just about everyone else does: swap CAPEX for OPEX by renting servers in the cloud: 1) major cloud providers are competitors in one sense or another; 2) Meta's scale is too huge; 3) Meta's requirements are too unique even beyond scale. So CAPEX it is, and optimizing that over time is generally a good idea.
Their adopted approach may or may not be right - one can only hope that they have done a deep analysis before jumping head first.
I admit I have not read through all the comments, only a significant portion of them. If I am repeating what someone else said, apologies for missing it.
All the arguments about delivering vs. not delivering and so on are perfectly fine, but it struck me that another, very real issue seems completely missing from the discourse. Assume you are an employee (not a freelancer or contractor or consultant). In my experience, unless you manage to negotiate something else at the time you are hired, there will be a clause in your employment contract that contractually obligates you to work exclusively for your employer. This is not just about time; issues like intellectual property rights are involved too (typically you assign those rights to your employer as "work for hire" or whatever the terminology is in your jurisdiction). More often than not on paper it is phrased restrictively enough to require a written agreement from the employee to engage in any other activity, whether or not for monetary consideration.
It is rarely (barring a**sehole cases) about preventing you from having hobbies or playing for your neighbourhood rugby team or taking an art class once a week. Rather, the employer will be (legitimately, IMHO) interested in preventing potential conflicts of interest of all kinds, including, e.g., reputation and, significantly, you not getting burnt out by a night job or involvement in an election campaign or whatever it may be. The last bit is, in the end, about "delivering", of course, and it's long term thinking in most cases.
Over time I've held second jobs quite a few times (usually teaching or consulting) and I've been on advisory boards of companies and non-profits other than my main employer. That was invariably under an exclusivity clause in my "day job" contract, and permission was invariably given, after questions about conflicts of interest in both competition ("Is there a clash with the company's business interests?") and scope ("How much time will it take per week/month?") had been resolved in a conversation. Actually, I always take additional care to avoid creating an impression that my employer sponsors or supports or endorses, say, the non-profit's activities. That takes conversations (and attention and vigilance on occasion) on the other end, and it's something I insist on, without explicit stipulation from my employer.
The point is, your main employer has a legitimate say, and you are typically (your mileage may vary) contractually obligated to inform them. Breaking that contract is, normally and understandably, a firing offence, and being above board is a matter of integrity and decency (IMHO). This looks conspicuously absent in these two cases as described. I guess I should say "four cases", what with 2 employees and 2 employers for each. And that, in my mind, is the essential issue whether or not the results were, in fact, delivered. And in at least 2 of the 4 cases they were not, apparently.
Disclaimer: I have no idea if those two guys actually had exclusivity clauses in their contracts. If not I'd say the former employer should also fire their lawyers. Ha!
Now, I would guess Musk and Bezos and Dorsey and their ilk do have their conflicts of interest declared, including both competition and scope (being CEOs of more than one company). As TSLA, AMZN, and TWTR are all publicly traded I suppose all that gets properly reported to SEC, etc. So the "above board" part is OK then, and apparently whoever pays their salaries (investors, shareholders through the respective Boards, etc.) are satisfied with the "delivery" aspect. If the satisfaction goes away I expect this to be reflected in either board meetings or share prices or, probably, both.
No company time was stolen in creating this post.
While I mostly agree with the arguments against making robots that walk similarly to humans, there is one lingering thought that remains a bit of an argument for: a robot that needs to navigate stairs (designed for humans) while reaching high enough (top shelves designed for humans, so a doggy won't do) and staying not too bulky (so not a human-tall quadruped). Cf. Daleks.
Not sure the argument is all that strong, or that it was a part of the design, but it still makes a bit of sense.
Building spaceships that can contain one or more humans and keep them alive is much more expensive than building something that can propel a robot somewhere that is designed to withstand high G-forces, not need food, water, or air, can withstand large temperature changes, and so on.
That argument was used when Moon landings were being planned. The argument is absolutely correct from the engineering perspective. It was shot down by pragmatic politicians who wanted to push space exploration further but realised that they needed the public imagination behind "Let's put brave American boys on the Moon!" to get the votes for the budgets. "Efficient space exploration" would never (well, at least for some useful values of "never") get the political support for the (still astronomical) appropriations. It seems that Mr. Musk's marketing of colonisation of Mars draws on that lesson, too.
Marketing works.
[Disclaimer: the paper seems to be behind a paywall, so I have not read it.]
Wouldn't close friends, I dunno, call or email or whatsapp each other to ask for an introduction to an interesting company or a suitable prospective hire? As opposed to exchanging information on LinkedIn?
This would be less probable with weaker connections, and the weakest ones would tail off as observed.
Sounds to me that this is a possible explanation for the observed effect.
The temperature lags behind the sunshine. That's why the "peak" is 4pm to 9pm.
And not because that's the time when people are at home after work, doing stuff under A/C while utilizing multiple electric appliances for various things they need/want to do, and, incidentally, charging the batteries of their EVs, depleted after a trip to work and back?
I've lived in the US, including CA. It always struck me as odd how early people tend to call it a night and go to bed. 9PM does not sound unusual.