* Posts by Michael Wojcik

3211 posts • joined 21 Dec 2007

Win some, lose some: Motorola 1, patent troll 1

Michael Wojcik
Silver badge

Re: WTF ?

How the hell is this not different to how fundamentally the Internet works ?

The Internet does not "fundamentally" involve "authentica[tion] by a third party". For that matter, it doesn't "fundamentally" involve uploading files.

Of course, I see you asked "how ... is this not different". Indeed it is different, so I suppose the answer is "in no way".

0
0

'Bar Mitzvah attack' should see off ancient and crocked RC4 algo

Michael Wojcik
Silver badge

Re: Very interesting

So an individual RC4 encryption is likely still fine, it's only when there are millions of them that one becomes statistically likely to fail. The odds of that one having a password, credit card or whatever are still low.

Agreed, but - as always - the threat has to be evaluated against a threat model. The paper specifically suggests using the vulnerability for session hijacking, for example. If the attacker is in a position to passively sniff a lot of RC4-encrypted sessions against a busy site, and make use of hijacked sessions, and automate the whole thing (obviously), it could be a credible threat.

It also notes that accounts with commonly-used passwords could be particularly vulnerable because a password table grouped by LSB greatly decreases the number of passwords that have to be attempted. So attacker sniffs a lot of traffic, detects weak keys (the paper notes how this can be done with good probability), and then later does active attacks using passwords from the appropriate subset. That requires a fair bit of additional exposure to work: the attacker needs other information (eg account name) and the attacked system needs a sufficiently liberal password-attempt policy. But it's not outside what's believable.

I've suggested before on Reg forums that the "RC4 must die now!" argument is a little too broad; that lots of traffic isn't valuable enough for anyone to bother attacking it, that probability of success is still low, that RC4 has other applications that it's still suitable for. But clearly the recommendation to use AES-GCM for session encryption (and one of the PFS EC key-exchange ciphers) is the more cautious position.

0
0
Michael Wojcik
Silver badge

Re: What's in a name?

Ah, but it's reminiscent of the well-known Birthday Paradox and related Birthday Attack against secure hashes. I'd rate it Pretty Good on the TLS Attack Name scale.

By the way, you left out the Be'ery and Shulman TIME attack.
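For anyone who hasn't run into the Paradox, it takes a few lines of Python to verify (my own sketch, nothing from any paper):

```python
from math import prod

def birthday_collision_prob(people, days=365):
    """P(at least two of `people` share a birthday), assuming uniform birthdays."""
    return 1.0 - prod((days - k) / days for k in range(people))

# The counterintuitive bit: the odds cross 50% at just 23 people. The same
# math applied to an n-bit hash says roughly 2**(n/2) inputs suffice for a
# likely collision - that's the Birthday Attack.
assert birthday_collision_prob(22) < 0.5 < birthday_collision_prob(23)
```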

0
0

No, really, the $17,000 Apple Watch IS all about getting your leg over

Michael Wojcik
Silver badge

And for today's fake analysis

... Tim decides to try sociobiology, the most suspect branch of sociology.

Can't wait to see what's in next week's column. Neuro-linguistic programming? Homeopathy? Dowsing?

0
0

Big Data high priest Stonebraker anointed with Turing Award

Michael Wojcik
Silver badge

Re: Good value

One site where you can see one of his presentations is http://slideshot.epfl.ch/play/suri_stonebraker, though I'm having trouble getting the video to play at the moment.

I'm looking forward to his Turing Award talk.

0
0

TOP500 Supers make boffins more prolific

Michael Wojcik
Silver badge

Re: Chemists are... @YAAC

But I seriously doubt that you are correct.

This entire thread is idle speculation with no real evidence offered for any position. But then I suppose when you're dealing with banal generalities like "people in field X will write better code", there's no point in trying to make a real argument.

Like it or not, writing efficient HPC code is still best done in a relatively simple language like Fortran, because you can get so close to the machine code

Some actual research shows otherwise. See the June 2014 issue of CACM, for example, for a paper demonstrating how deep EDSLs can produce more-efficient code than hand-written C or Fortran for certain HPC tasks, in part because they can do higher-order optimization.

The "close to the metal" myth has been thoroughly debunked time and time again. Humans aren't good at predicting how modern systems will behave. They often make erroneous evaluations of the effects of caching, branch prediction, speculative execution, etc. There was a good piece a dozen years ago in DDJ about optimizing a hand-written assembler rendering package on x86, where at one point the author enlisted Intel's help in figuring out why a linear search consistently outperformed a binary search - CPU traces showed it was the branch-prediction effects. Or see this piece on cache effects.
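A toy model makes the point (this is my own sketch, not the DDJ experiment): feed the branch outcomes of each search to a 1-bit "same as last time" predictor and count mispredictions. The linear scan's loop branch goes the same way on every iteration but the last; binary search's direction branch is a near coin flip.

```python
import random

def linear_branches(hay, needle):
    """Outcomes of the 'keep scanning?' branch in a linear search."""
    out = []
    for x in hay:
        out.append(x != needle)   # True = keep going, False = found it
        if x == needle:
            break
    return out

def binary_branches(hay, needle):
    """Outcomes of the 'go right?' branch in a binary search (hay sorted)."""
    out, lo, hi = [], 0, len(hay) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if hay[mid] == needle:
            break
        go_right = hay[mid] < needle
        out.append(go_right)
        if go_right:
            lo = mid + 1
        else:
            hi = mid - 1
    return out

def mispredictions(outcomes):
    """1-bit predictor: guess the same outcome as last time."""
    miss, last = 0, True
    for o in outcomes:
        miss += (o != last)
        last = o
    return miss

random.seed(42)
hay = sorted(random.sample(range(10_000), 64))
lin = sum(mispredictions(linear_branches(hay, n)) for n in hay)
bnr = sum(mispredictions(binary_branches(hay, n)) for n in hay)
# Linear search mispredicts exactly once per lookup (the final 'found it');
# binary search mispredicts several times, despite doing fewer comparisons.
assert lin == len(hay) and bnr > lin
```

Crude, obviously - real branch predictors are smarter than one bit - but it shows why counting comparisons alone misleads.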

Meanwhile, languages specifically designed for HPC, like Julia, often produce code that's just as efficient as anything done in Fortran. And, of course, any optimization outside the inner loops probably won't matter (by Amdahl's Law), so writing the entire application (as opposed to just the time-critical routines) in Fortran doesn't offer any performance advantage.

The main advantage of Fortran for scientific computing is the vast corpus of extant legacy code, and the community of scientific-computing programmers who know the language.

0
0
Michael Wojcik
Silver badge

Re: Cause, meet Effect

Universities with strong research activities in relevant areas can justify and afford supercomputers

But not all do. And this is precisely the kind of data you'd use to see whether a Top500 system is, in fact, justified.

Really, the feebleness of the comments on this story is surprising even for the Internet.

0
0
Michael Wojcik
Silver badge

Re: Does it do correlation calculations?

There are R1 universities with Top500 systems, and R1 universities without.

0
0
Michael Wojcik
Silver badge

Re: Purpose of paper entirely misunderstood!

Have you read the paper? I haven't - $40 is more than I feel like paying for it, and since I'm not teaching this semester I don't have online access via the university library - but I'm just a little suspicious of random idiots on the Internet denouncing work they haven't read.

0
0

Favicons used to update world's 'most dangerous' malware

Michael Wojcik
Silver badge

Re: What will the stooopid do?

the lousy spelling and grammar don't inspire me to believe I am more clever than people who are going to be duped by me

Microsoft Research's Cormac Herley published a paper a few years back arguing that there's an economic advantage to making untargeted spam and phishing messages deliberately stupid: it filters out respondents who would later wise up and stop corresponding before the phisher extracts money from them. The paper's worth reading - it treats victim selection as a binary classification problem.

Herley, with Dinei Florêncio, also argued, back in 2008, that phishing is a low-cost, low-skill, low-profit endeavor and that the profit from it is hugely overestimated. Once in a while someone gets a big payout - the stories that show up in the news - but for the most part it's basically subsistence scavenging. So it's generally not done by the people who create the malware or own the botnets. They lease their resources to the hordes of small-time phishers.

0
0
Michael Wojcik
Silver badge

Side channel is side channel. This isn't even a particularly clever one. Other malware has used HTTP image retrieval as a side channel; it's just one way to disguise your C&C traffic. It's a lot less ambitious than, say, IP over DNS, and even that is pretty obvious to anyone with an IT-security background.

People have used power consumption and timing as covert channels. They've used process IDs as covert channels. They've used apparent typographical errors in text as covert channels. Researchers showed that modem activity lights leaked enough information to be inadvertent side channels, so it wouldn't be surprising if someone tried using them deliberately as a covert channel.

Every channel is a potential covert channel. If it transfers information, someone's going to try to use it for something other than its intended purpose.

0
0

Becoming Steve Jobs biography: ‘Much of it was chutzpah and self delusion’

Michael Wojcik
Silver badge

Re: @ Ian Joyner

*I* am pretty certain that the vast majority of computers are purchased by business enterprises and governments.

The vast majority of computers, by CPU, are embedded systems. Many are still 8-bit. By now the wider-bus CPUs have probably surpassed the 8-bit cores simply due to economies of scale, but as recently as the early 2000s industry experts were claiming that around 90% of cores sold were still 8-bitters.

The majority of computing by jobs performed belongs to big back-office business systems - financial transaction processing and the like. The majority of computing by compute cycles performed is probably in scientific supercomputing. The majority of I/O is probably in network routing.

If we want to talk only about computers that ordinary folks use directly and explicitly, the majority are feature phones and smartphones.

Desktop/laptop PCs are a small minority.

(The OP's screed about "computers controlling people" versus "people controlling computers" is equally ridiculous, but who has time to take that mess apart?)

0
0
Michael Wojcik
Silver badge

Re: NeXT

Tim Berners-Lee had one.

Shrug. I had one too - well, more precisely, used one a few times when I was working at IBM. It was OK, but I got real work done on the modded PC RT and pre-release RS/6000 I had at my desk. The NeXTcube was a little underpowered for the stuff I was doing.

Wrote a system/program with it : the worldwide web.

TBL wrote httpd and the WorldWideWeb client. He did not write "the worldwide web", any more than BBN "wrote the Internet".

And, frankly, it would have been just as easy to write httpd on pretty much any UNIX workstation of the day, and WorldWideWeb was certainly feasible on anything with a graphics head. NeXTstep did make writing GUI apps easier, I'll grant you; I wrote a handful of quick & dirty Xlib apps back in the day, some "raw" and some using widget sets like Xtk, Xaw, etc, and it was more trouble than it should have been. (Better than writing for MS Windows of the era, though.) But the "www" line-mode browser was developed at the same time, and would have worked just as well to demonstrate the concept.

0
0
Michael Wojcik
Silver badge

Re: He's long dead...

OK, got my vote for Troll of the Week. Concise, inane, offensive, and posted anonymously. Hits all the buttons.

0
0
Michael Wojcik
Silver badge

Re: Perfectionist @AlBailey

When's the last time you connected your phone — whatever variety — to any computer?

Today. What's your point?

0
0
Michael Wojcik
Silver badge

Re: Perfectionist

iTunes is reviled, and rightly so, only on Windows.

I revile it just as much on OS X, after I had to fix its stupid catalog for my wife when she moved her music collection to a new external drive. Idiotic black-box design.

0
0

First figures in and it doesn't look good for new internet dot-words

Michael Wojcik
Silver badge

Re: In other news

Where does baby oil come from?

When a momma oil and a papa oil love each other very much...

0
0

Apple takes ACID-compliant NoSQL upstart FoundationDB

Michael Wojcik
Silver badge

Re: Not had much of a chance to play with NoSQL in a war zone

No. SQL is a language. ACID is a set of consistency guarantees - or, more precisely, consistency goals, which are guaranteed when further requirements are met. Relational DBMSes provide ACID guarantees when the database is in normal form, for example; but the relational algebra is not the only way to prove those guarantees are satisfied.

And these days, "NoSQL" is typically glossed as "Not only SQL". It's a dumb term, but then so much of our jargon is.

2
0
Michael Wojcik
Silver badge

Re: FOSS that you can rely on

That requires a bit more in the way of citations for me to accept any part of the premise, let alone the conclusion.

Indeed. I certainly don't believe FOSS is a silver bullet, and I make my money from proprietary, closed-source software, but that post had so much handwaving I could feel the breeze here.

"Game theory mumble economics mumble ignore the exceptions mumble QED!" I'm reminded of Professor Frink's "Elementary chaos theory tells us that all robots will eventually turn against their masters and run amok in an orgy of blood and kicking and the biting with the metal teeth and the hurting and shoving."1

A bit surprising there wasn't a "quantum" somewhere in there too.

1See also Hawking, Musk, Woz.

2
0

Hawk like an Egyptian: Google is HOPPING MAD over fake SSL certs

Michael Wojcik
Silver badge

Re: Bye bye CNNIC

I'm surprised - there doesn't seem to be a Firefox extension for whitelisting CA certs, like a NoScript for PKI chains. I wonder if there's a technical reason for that (haven't looked at the Firefox add-on interface in a long time), or if it's simply that no one has written one.

It'd be annoying for the first little while, but I'm willing to bet that pretty soon I'd have whitelisted all the CA certs I legitimately expect to see until the next update. And when a non-whitelisted root or intermediary comes up, the extension could do quick CRL and OCSP checks.

Maybe a project for my next holiday.

2
0
Michael Wojcik
Silver badge

Depends on your local legislation, and your company policies of allowed uses of company equipment. Some searches could violate both.

That doesn't mean undermining the public PKI to intercept encrypted Google requests isn't malicious. Possible illegality or violation of organization policy by the users doesn't vacate the malice on the part of the snoopers. It may make the snooping legal; it doesn't make it good.

Perhaps you're familiar with a little maxim about two wrongs?

1
0

Prez Obama cares about STEM so much he just threw $240m of other people's money at it

Michael Wojcik
Silver badge

Re: Stupid title

I mean diversity of thought, not necessarily skin color

You mean the "diversity of thought" promoted by the disproportionate buying power of the Texas Board of Education, for example? With diversity like that, who needs prejudice?

0
0
Michael Wojcik
Silver badge

Re: Stupid title

Once the tax has been deducted from my wage, the money is no longer mine.

Arguably it's never "your money" in the first place. We have a money-based economy because we have a collective agreement to treat money as a proxy for value. That agreement persists only because we live in a functional state.

There are examples of "bottom-up" money operating in what are essentially anarchic conditions - for example in Somalia, where a large segment of the populace revalued the notes issued by the last functional government and agreed to treat them as valid tokens of exchange. But that doesn't scale; it's certainly not going to fly in international markets, where the best you can hope for is to have specie valued for its substance.

In short, we have money because we pay taxes, not in spite of paying taxes. And thus the government can't "take our money", because the money always already belonged to the state as a collective.

Yeah, I know that's not going to be a popular view. Downvote away.

0
0

ROBOT INVASION has already STARTED in HIPSTERLAND

Michael Wojcik
Silver badge

Re: Cheap flexible version

We just use Lync. No need to carry anything around.

"The despair and futility are built in!"

0
0
Michael Wojcik
Silver badge

Re: Cheap flexible version

I just have an unpaid intern carry around a tablet running Skype.

Huh. I just have an unpaid intern slap anyone who suggests we need a telepresence anything. Seems to save a lot of time.

0
0
Michael Wojcik
Silver badge

Re: One and only

These things will work best on a timeshare basis, allowing people working from home to attend a meeting 'in person'.

I didn't realize the domain of "best" went that small.

0
0

Dell denies 'insecure autoupdate app' flings open PC backdoor

Michael Wojcik
Silver badge

Re: And other manufacturers' autoupdate, Mr Forbes?

This particular vulnerability is due to how the Dell utility receives and authenticates requests, and what types of request it supports. The Lenovo and HP update-helper apps may well have security holes, but they're unlikely to be the same as this one.

0
0
Michael Wojcik
Silver badge

Facepalm x 2 combo

From the blog post:

if (absoluteUri.Contains("dell"))

That's how it authenticates the request. "Does the string 'dell' appear anywhere in the Request-URI?" Someone at Dell needs a stern talking-to.
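To make the problem concrete, here's the check paraphrased in Python (function names are mine, not Dell's), plus the minimal improvement of at least parsing out the hostname:

```python
from urllib.parse import urlparse

def vulnerable_check(absolute_uri):
    # Paraphrase of the quoted C# line: substring match anywhere in the URI
    return "dell" in absolute_uri

def hostname_check(absolute_uri):
    # Still no substitute for properly signed requests, but at least it
    # examines the actual host rather than the whole string
    host = urlparse(absolute_uri).hostname or ""
    return host == "dell.com" or host.endswith(".dell.com")

# The substring test is satisfied by any attacker who can spell:
assert vulnerable_check("http://evil.example/dell/payload.exe")
# A hostname check, at least, isn't:
assert not hostname_check("http://evil.example/dell/payload.exe")
assert hostname_check("https://www.dell.com/support")
```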

1
0
Michael Wojcik
Silver badge

IIRC it's a browser plugin.

The version that's described in the article is not a plugin. It's a little app you download from Dell that runs in the Windows taskbar. The Dell service web page communicates with it over HTTP using Javascript. I think Dell serves a plugin if you're running a browser they support, and the standalone app otherwise.

The quick way to check for the vulnerable version is to look at what's running in the notification area of the taskbar.

0
0

Hey, Woz. You've got $150m. You're kicking back in Australia. What's on your mind? Killer AI

Michael Wojcik
Silver badge

Re: AI & market forces

Pretty easy to see that within our lives a company could exist with few or no human employees.

Eminently doable now, in some industries. Certainly there are any number of IT fields where a smaller player could be completely automated, with humans only involved in the legalities of keeping the company running (and collecting a paycheck). Think intrusion detection or penetration testing, for example. And then there are fields like automatic book writing - a highly lucrative enterprise which is nearly entirely automated (see Philip Parker). Small manufacturing: set up the production line, sell via online B2B sites, automate your delivery process and supply chain. And so on.

But automated business doesn't require self-actualizing machines that consider counterfactuals and develop and execute projects. Automated business can be entirely cybernetic. The proactive side of business is the more interesting part and the part that humans are most inclined to believe they can do better.

0
0
Michael Wojcik
Silver badge

Re: Not Quite Right

I suppose this could be "more likely", but frankly from where I'm sitting the probability of either is asymptotically approaching zero.

Human civilization is a few thousand years old. One decent catastrophe and it's gone. Even a fairly minor one could set things back for decades - think a supervolcano eruption that cools the earth significantly for a decade or two and causes mass die-offs; that'll severely crimp most AI research budgets. I think odds are pretty good that we'll be gone before the self-actualizing machines or the mind-machine synthesis arrive.

0
0
Michael Wojcik
Silver badge

Re: But why would they kill us?

Every piece of Science Fiction I've read on the subject has never actually mentioned anything about the AI's End Game. Sure they want us humans dead, but what is the reason for doing so?

Eh, see, you should read Charles Stross. The Eschaton wants humans alive, so they can go on to invent it. It's a standard ontological loop.

0
0
Michael Wojcik
Silver badge

Re: No wonder Apple won't talk to El Reg

I'm still puzzled by "Woz, best known for his sponsorship of the 1980s US Festivals of music and culture". Was ... was that a joke? A bit subtle for the Reg. (There's a variant of Poe's Law at work here: a sufficiently subtle in-joke is indistinguishable from stupidity.)

0
0
Michael Wojcik
Silver badge

Re: No AI for computers ever

When Steve Wozniak, Elon Musk and Stephen Hawking all say the same thing, we should listen to their every word on the subject. These are some of societies most intelligent outliers, geniuses that are almost able to forsee the future.

I think that must be the stupidest version of the argument from authority that I have ever read.

What evidence is there that Wozniak, Musk, and Hawking are "some of societies [sic] most intelligent outliers"? How large is that group? "Outlier" in what sense, and why does it make them authoritative on what is, in any analysis, a complex and highly dubious hypothetical question about a complex and highly dubious field of endeavor?

And as for "geniuses that are almost able to forsee the future": that's a load of unmitigated crap. What other similarly bold predictions have they made, and how many of them have been correct? The only one that comes to my mind is Hawking's about the Higgs boson, and that's not looking so good.

More importantly, authorities are wrong about matters outside their fields of expertise all the time, and (thanks to Dunning-Kruger and other psychological traps) people who are authoritative in one area tend to be even more overconfident in others. Linus Pauling received two Nobel prizes, and plenty of people jumped on his megavitamin bandwagon, but he was wrong, wrong, wrong. History is full of geniuses who occupied their time between flashes of great work with ill-advised, unproductive mediocrity and often outright rubbish.

0
0

Ten things you always wanted to know about IP Voice

Michael Wojcik
Silver badge

100 is bare basics nowadays and covers all the bog-standard stuff, access control interfaces, etc. but Gigabit to the desktop is the norm an expected.

Oh please. I only upgraded my home switches to 100Mb because the old 10Mb ones died. All the family's on wireless anyway, of course, and my wired machines pretty much never needed more bandwidth. And since I'm working from home, most of the traffic (including, yes, VoIP, desktop sharing, etc) is over the WAN, which means a VPN on top of a commodity cable ISP connection that's a few Mb/s at best.

In my younger days I worked on teams doing remote X11 over shared-media 10BASE-2 and 10BASE-T with a dozen machines in the collision domain. And we all managed to get our work done. Later I worked from a remote office with a 56Kb line and Frame Relay connection to the main office; and after that from a home office with 64Kb BRI ISDN.

Kids. Lawn.

0
0

Mature mainframe madness prints Mandlebrot fractal in TWELVE MINUTES

Michael Wojcik
Silver badge

Re: Pi in the, er, print

Nowadays this long-length arithmetic would be useful for cryptography.

Not really. The bignum arithmetic algorithms used for modern cryptography are much more efficient than BCD. Modern crypto typically uses the faster variations on the Karatsuba algorithm for multiplication, for example. And with modern crypto you're often doing arithmetic in a Galois field and more complex algebraic structures, where BCD would only get in the way. (People are always publishing new arithmetic algorithms for cryptographic math.)
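The Karatsuba trick, for the curious: (a·B + b)(c·B + d) needs only three sub-multiplications, because ad + bc = (a+b)(c+d) - ac - bd. A toy Python version (real crypto libraries use far more tuned variants, splitting in binary rather than decimal):

```python
def karatsuba(x, y):
    """Toy Karatsuba: three recursive multiplies instead of four."""
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    base = 10 ** m
    xh, xl = divmod(x, base)
    yh, yl = divmod(y, base)
    high = karatsuba(xh, yh)
    low = karatsuba(xl, yl)
    mid = karatsuba(xh + xl, yh + yl) - high - low
    return high * base * base + mid * base + low

assert karatsuba(31415926, 27182818) == 31415926 * 27182818
```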

For fast large-range arithmetic you don't want BCD anyway. BCD wasn't designed for speed; it was designed to represent radix-10 numbers exactly, primarily to avoid rounding modes that didn't match the pencil-and-paper arithmetic business computers were trying to replace. In some cases that was necessary for legal reasons; in others it was necessary to keep an organization's accounts consistent.

Even simple fixed-point large-range pure-binary representation should outperform BCD on a modern system with binary ALUs. Even if a system has both binary and BCD ALUs, binary's denser format provides better locality of reference.
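Python's decimal module illustrates the problem BCD hardware existed to solve, without any BCD hardware (a software radix-10 stand-in):

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so the machine's
# answer drifts from the pencil-and-paper one:
assert 0.1 + 0.2 != 0.3

# A radix-10 representation gives the answer the accountants expect:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```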

0
0

HUGE Aussie asteroid impact sent TREMORS towards the EARTH'S CORE

Michael Wojcik
Silver badge

Re: Devonian?

I always end up with the vision of a T-rex with a bow tie and a laser gun.

Weird. I always think of T. Rex as more the bolo-tie sort.

0
0

Scientists splice mammoth genes into unsuspecting elephant

Michael Wojcik
Silver badge

Re: Another announcement, another project of dubious worth

Reviving the mammoth is more than thirty years old as an idea.

And isn't what this team is trying to do, as the article explicitly makes clear.

1
0
Michael Wojcik
Silver badge

Re: Darwinism In Reverse

Scientists too smart for their own good that got killed by their genetic experiments.

Oh yes, we've never seen that trope before.

Terribly dangerous research, this, splicing some genes into a genome and culturing some cells. Best keep the petri dish under lock and key.

Personally, I'm going to remain a bit more concerned about the wild-eyed folks at Monsanto et alia whacking random genes into commercial plant seed and the like, with essentially no idea what the consequences of widespread distribution might be.

2
0

Boffins twist light to carry 2.05 bits in one photon

Michael Wojcik
Silver badge

Re: Bah!

Yes, you can hide a needle in a haystack, but then you have to send the entire haystack along in order to move the needle.

And the recipient has to know how to find the needle. If you have a secure way to tell the recipient that, why do you need the haystack in the first place?

That is the Key Distribution Problem. Asymmetric cryptography is one way of addressing it; quantum key distribution is another.

The objections raised in the first couple of posts in this thread miss the point entirely. "What do I need a telephone for? I've been able to send an entire novel by post for years! That's a lot better than reading it to someone over the phone." Well, yes. And driving a car is a better way to travel a thousand miles than walking; but it's not really suitable for going from the bedroom to the bathroom.

2
0
Michael Wojcik
Silver badge

Executive summary: Photons can do various things as they move along. If you can pick which of those various things they do when you emit them, and figure out which they're doing when you receive them, then you can use that to encode messages. This team has shown a way to use more of the things photons can do.

1
0
Michael Wojcik
Silver badge

Re: For those of you wondering, 2.05 = 7 ^ exp(-1)

They point out that the ideal value (F ->1) is 2.8

... because if you have 7 symbols with perfect reliability, you have log2 7 = ~2.8 bits - just to clarify that point.

Thanks for reading and citing the paper, by the way. I was puzzled about the derivation of the OP's formulation. As you say, you never know where e will rear its head, but the association in this case wasn't clear. I'm not even sure what a real-world interpretation of 7^(1/e) would be, so I also suspect it's accidental. But IANAM.
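The arithmetic, for anyone checking along at home (my own sketch):

```python
from math import e, log2

# Seven perfectly distinguishable states per photon carry log2(7) bits -
# the F -> 1 ideal of ~2.8 quoted from the paper.
ideal_bits = log2(7)
assert abs(ideal_bits - 2.807) < 0.001

# The OP's 7 ** (1/e) does land right next to the reported 2.05 bits,
# which is presumably why the coincidence caught the eye.
assert abs(7 ** (1 / e) - 2.046) < 0.001
```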

0
0

Firefox, Chrome, IE, Safari EXPLOITED to OWN Mac, PCs at Pwn2Own 2015

Michael Wojcik
Silver badge

3) 10'000 retards will demand "C"

Well, it's that or rewrite a lot of existing code. And while I think that's a swell idea in principle, it's economically infeasible.

0
0
Michael Wojcik
Silver badge

Re: I've tried Mercury…

DAM was talking about the programming language, not the browser. See the Wiki page he linked to.

0
0
Michael Wojcik
Silver badge

Why doesn't someone actually ask hackers to design a secure(er) CPU, OS and programming language from the ground up ?

Why doesn't someone ask doctors to design better people from the ground up?

As others have pointed out, security analysis and secure design are different fields, though related. And practical security analysis - finding and exploiting vulnerabilities - is very different from practical system design and development. "Hackers" is irrelevant here; those are simply different jobs.

And there's no such thing as a "secure" anything. Forget that concept - it's meaningless. Even "more secure" is only meaningful in context.

And we already have CPUs, operating systems, and programming languages designed to be more secure than the popular ones against common attack vectors. We have capability CPUs (Intel iAPX 432, IBM's i version of POWER). We have Orange Book A-level certified OSes. We have languages like Ada, Erlang, and Haskell.

For the most part they haven't seen wide use1 because of cost. CPU security features cost performance. OS security features slow down users, developers, and administrators, and require more highly-skilled, less-common staff. Language security features require less-common programmers, who can charge correspondingly more for their labor. And businesses have a huge investment in existing systems which they are disinclined to simply throw away.

Even in an era when businesses can suffer large, expensive, embarrassing losses due to security failures - think Target, for example - the economics don't favor switching to equipment with more security features. Target can't afford the capital expenditure to switch to a POS system written in Erlang and running a formally-verified OS. For one thing, they'd have to build it themselves, with staff they trained.

1The IBM AS/400 / i machines are successful, but their numbers are dwarfed by x86, obviously.

0
0
Michael Wojcik
Silver badge

"uninitialized stack pointer"

We'll know for sure when the vulnerability is published, but I suspect this was intended to mean "an uninitialized pointer on the stack".

I'm not sure how I'd go about doing that in any of my code, even if I wanted to.

If you want to actually set the stack pointer (assuming you're executing in an environment that has such a thing) to an invalid value, there are generally a few ways you can go about it. Thread implementations typically keep a stack per thread (there are other ways to do it, but they're less efficient), so whacking internal threading data structures can do the job. Exception-handling mechanisms may be coerced into it. In C, muck around with the internals of a jmp_buf and longjmp with it. That last even gives you a way to set it to an "uninitialized value", in some implementations.

1
0
Michael Wojcik
Silver badge

Re: Dare I say it...

Twice as many as all versions of IE.

A meaningless statistic, since reporting isn't standardized.

I'm no fan of Chrome - I only ever use it to compare against other browsers when investigating a rendering issue - but your comment is pointless.

0
2

Ancient SUPERNOVA EXPLOSION contains enough dust for 7,000 EARTHS, say boffins

Michael Wojcik
Silver badge

Re: What, precisely, is the news here?

That speculation is confirmed by evidence?

It's not like there were a bunch of competing theories1 for the origins of the heavier elements. But a basic aspect of scientific epistemology is not stopping at the point where you say, "yeah, that sounds likely".

More specifically, in this case, these observations provide evidence to address the question of "when heavy elements are produced by a supernova, are they just swept back in by the rebound wave?". Because if the answer to that was "yes", then we'd have a bit of a puzzle.

At any rate, that's my understanding of this particular bit of research.

1Explanations relying on the supernatural, yes; theories, not so much.

0
0

AI guru Ng: Fearing a rise of killer robots is like worrying about overpopulation on Mars

Michael Wojcik
Silver badge

Re: Looking beyond the end of Ng's nose

Will another 60 years of AI research produce a truly intelligent machine? Almost certainly.

For exceedingly small values of "almost", perhaps.

0
0
Michael Wojcik
Silver badge

if we continue our research into advancing 'machine learning' towards real artificial intelligence (whatever that means) and on towards sentience (again, whatever that means)

Very little active ML research, and not much AI research, is aimed at "advancing ... towards sentience" (or what you're calling "real artificial intelligence", which I suppose we can gloss as what some people call "Strong AI" or "human-like AI"). At improving ML so it can take on more tasks normally delegated to humans, sure; but making a human-like machine intelligence has largely fallen out of favor in the research community. Where it persists, it seems to mostly be attempts to better understand human cognition by creating ever-more-complex machine analogs.

And "sentience" is probably not a useful term here. Etymologically and traditionally it simply means "feeling" or "capable of perceiving sensation", and as such applies to a vast range of entities, including arguably any cybernetic system - so we're surrounded by sentient machines already. More narrowly and recently it's been used to mean "having a sense of self", which is trickier, because our models of self (philosophical, psychological, and neurological) are conflicting and unsatisfactory, but again that very likely applies to lots of more-complex organisms and arguably to some machines as well, which contain logical state information that represents the functioning of their material incarnations.

In some quarters, for a century or so, "sentient" has been used to mean something like "a human-like sense of self and capacity for cognition", but that's a strained usage at best, and seems to come largely from SF writers trying to sound impressive.

A better term is probably "sapience", which etymologically means "wisdom" and is used as a term of art in philosophy to distinguish thinking beings - Dasein in Heidegger's sense - from all other entities. Even with sapience, though, it's really not clear what we mean when applying it to machines - and it's particularly unclear what attributes of it people like Musk are concerned with. Are they worried about machines that can desire? That can imagine? That can emote?

I think the starting point to any reasonable discussion must be an agreed-upon definition of the terms - especially 'intelligence'.

Good luck with that. European-derived philosophy - which is a relatively homogenous school of thought, compared to the entire range of human intellectual endeavor - hasn't come to any consensus there. And computer science doesn't show any signs of doing better. Since people are still arguing over the Turing Test (and generally completely failing to understand it in the first place), I wouldn't look to the tech disciplines to agree on the matter either.

0
0
