Re: I think that algorithms are already largely regulated...
It's the KGB hacking the British election process!
Algorithms are almost as pervasive in our lives as cars and the internet. And just as these modes and mediums are considered vital to our economy and society, and are therefore regulated, we must ask whether it's time to also regulate algorithms. Let's accept that the rule of law is meant to provide solid ground upon which our …
There are two categories of algorithm that need to be distinguished in this discussion. The first is the traditional algorithm that uses a well-defined set of steps to produce some result, such as an encryption algorithm. The second category comprises algorithms built on deep-learning neural networks, such as those Google uses for image classification.
The first category is the software equivalent of a mechanical device - it does exactly what its user asks it to do (assuming no bugs). The second, which is what I think the article's author is concerned about, is more like the software equivalent of a dog - it can be trained to do what we want, but it is not entirely under our control.
While we can test, or even examine, the first category of software, this is fundamentally impossible for the second. Its learning is distributed across myriad connections, making it impossible to examine for "correctness".
The solution, I suspect, is to test these programs in the same way we would test a living organism - by giving it an exam. An autonomous vehicle, for example, would need to be given a comprehensive driving test. Software for giving financial advice should also be put through tests similar to what a human doing the same job would need to go through. So, perhaps the author is correct - these programs need to be subject to the same sort of laws that apply to us.
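The "exam" idea above can be sketched in code. This is a minimal illustration, not anything from the article: `toy_model` is a hypothetical stand-in for a black-box system under test, and the scenarios and pass mark are made up for the example. The point is that certification scores observed behaviour rather than inspecting internals.

```python
def toy_model(speed_kmh, limit_kmh):
    """Hypothetical stand-in for a black-box driving policy:
    decides whether to brake, given current speed and the limit."""
    return speed_kmh > limit_kmh  # brake when over the limit

# The "exam": (inputs, expected decision) pairs, like driving-test scenarios.
EXAM = [
    ((50, 60), False),   # under the limit: no braking needed
    ((70, 60), True),    # over the limit: must brake
    ((60, 60), False),   # exactly at the limit: still legal
    ((120, 50), True),   # far over the limit: must brake
]

def certify(model, exam, pass_mark=0.95):
    """Run the exam and report (score, passed) without ever
    looking inside the model -- only its answers are graded."""
    correct = sum(model(*inputs) == expected for inputs, expected in exam)
    score = correct / len(exam)
    return score, score >= pass_mark

score, passed = certify(toy_model, EXAM)
print(f"score={score:.2f} passed={passed}")
```

A real driving test or financial-advice audit would of course need far more scenarios, adversarial ones included, but the shape is the same: grade the outputs, not the wiring.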
Any algorithm is only in place because a human wanted it there. Even considering a sufficiently advanced AI that could create its own, the AI is only there because a human wanted it.
Our existing legal frameworks, as stated in the article, are tried and tested (although not always perfect or completely fair) at protecting one person from the decisions and actions of another. Any algorithm is just an extension of a person's (or group's) decision to bring it about.
Got denied a job because an algorithm on LinkedIn found a tweet you made years ago saying Gina Davis has a banging rack? You already have recourse.
Car crashed into an orphanage because the ABS algorithm failed as you attempted to take a corner? There's already recourse.
Medical diagnosis algorithm your doctor used wound up prescribing medication that compelled you to respond to articles on the internet? You've already got recourse... oh.
Could you imagine the nightmare of trying to make your software compatible with the legislation of multiple different jurisdictions? Gawd. Even just good ol' America. Each state would have its own set of ridiculous standards to apply, as well as federal, then the UK, then the EU. By the time it wormed its way down here to Australia, nobody would give a pickled rat's arse. It would be the end of us. We'd be back using rotary dial phones and riding bicycles, even getting our news from bits of paper!
And there are some who WANT that. Slow down the rat race, control the population and the exploitation of the Earth, and all that.
A can of quite alien worms has been opened wide right at the heart of core systems of SCADA administrations, Alexander J Martin, and the prognosis for the state of executive health is already decided in the correct interaction and proper engagement with what the future offers via A.N.Other Means and Advanced IntelAIgent Memes.?! ……. AI@ITsWork
And shared as a question exclaiming fact because as a fiction would it be too stealthy to steer and/or influence with concerned input to output. Although to be perfectly honest, would that choice be a lock only to be opened with the demonstration of one being worthy of handling the key.
Do you not all find it somewhat perverse and naive of historic politically biased systems imagining they will ever have command and control over future eventing machines with access to more information and intelligence than will ever be designed to flow to and/or through traditional closed elite executive order systems?
I've opened, but not got too far along with, a book with the above title. It deals with exactly the topic being discussed here.
I think that a regulatory approach would be as pointless as outlawing stupidity. You have to keep redefining stupidity, and as soon as you think you're done, somebody moves the yardsticks. Even a requirement to file flowcharts, founders on the reasonable contention that the algorithm and its flowchart are trade secrets of the company.
Hold corporations accountable for their actions, whether generated by human, or by algos ( or a bit of both ).
I.e. ** enforce existing law **.
I had a months-long battle with TalkTalk. They sent threatening letters that I could not easily reply to (they instructed me to phone a premium line). I couldn't easily reply, as they did not list the registered address on the communications. This is against the law, and if it is being done routinely by a major company, it really should be an absolute no-brainer for government to prosecute punitively. Why isn't that happening!?
They don't care.
For further explanation see comment above: #Could this idea be more backwards?
Presumably they'll use an algorithm to regulate algorithms. So we'll have a collection of algorithms that follow one set of rules, and the algorithms that we write that follow another.
So like the Police. Seems logical.