hashtag?
Is that the thing that says my breakfast item is $1.40?
It looks like we should all learn Homer Simpson’s sock-puppet phobia. If this blog post is accurate, then corporates aren’t just briefing social media teams to “manage” their reputation on services like Twitter. They’re creating armies of software-driven sock-puppets to gang up on bloggers and commenters to swamp negative …
You know that DailyKos will find corporate threats to democracy and free discussion under every bed, in every managerial space (except the one currently occupied by the Peace Laureate) and in every potted plant with a slight a-progressive tinge.
Yes, their readers and the unionized worker proletariat are none too bright and are easily misled by blacktied-brownshirted (but presumably lavishly funded) alicebots.
What is the government doing against this?
It wasn't about AIs posing as people, it was about a small number of "reputation managers" posing as a virtual crowd and astroturfing opposing voices into submission. I'd expect more paranoia from the Reg. The thought of some marketroid in an office somewhere single-handedly taking over entire discussions to distract from their employer's dodgy actions is something the Reg should be all over.
You don't seem to be extrapolating this to an obvious level of deception. For people who are not interested in flinging links into a conversation for product marketing purposes, but are more interested in influencing opinion and/or consensus, this is dangerous software in the hands of a skilled operator.
Of course, it would not have to be used in an obvious "I want banana NOW" way. To be effective, it only needs to be subtle to sway unsure minds and/or give the appearance of a group reaching a common point of agreement or to give the appearance of spontaneous, spirited opposition to something.
This is propaganda software and it can be dangerous and it should not be dismissed as "conspiracy under every rock." This is a digital equivalent of owning the printing presses. It's one way to really skew things. Put another way, people wouldn't likely be putting out bids for something like this if they thought it was a complete waste of time and unlikely to yield usable results.
I don't know about your democracy, but I like mine out in the open.
... no it isn't.
"Deceptive business practices" is, however, and that sort of behaviour should come with penalties, like jail time.
The corporates can now spray the noosphere with an even higher rate of turds per second, and with a greater degree of camouflage and deniability.
And as Unhandle points out, that's just the tip of a very blue iceberg.
If an in-crowd can massage each other's reputations up, the reputation graph will show many in-crowd internal links and few external ones. Sure evidence of a mafia, but it needs the kind of analysis to expose and discount it that made Google very rich. Reputation is more valuable if it extends over a longer period, but sock puppets tend to have a short lifetime.
Currently el-reg only collects reputation scores on a single post. How long before this applies to a particular poster?
You don't want to exclude new voices from a conversation, but you may want to limit how much they can say and in how many places they can speak.
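The in-crowd test described above can be sketched in a few lines: if a clique's upvotes mostly come from inside the clique, the internal-vote ratio is suspiciously high. This is a minimal illustration with made-up names and vote data, not any site's actual reputation algorithm.

```python
# Detect an in-crowd by the fraction of "internal" votes it receives.
# All names and the vote data below are hypothetical.

votes = [  # (voter, author_of_post_voted_up)
    ("alice", "bob"), ("bob", "alice"), ("alice", "carol"),
    ("carol", "bob"), ("dave", "alice"), ("bob", "carol"),
]
clique = {"alice", "bob", "carol"}

# Votes received by clique members, split by where they came from.
internal = sum(1 for v, a in votes if v in clique and a in clique)
received = sum(1 for v, a in votes if a in clique)
ratio = internal / received if received else 0.0
print(f"internal-vote ratio for clique: {ratio:.2f}")
```

A real system would also weight by account age, since (as noted above) sock puppets tend to have short lifetimes.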
Anyone who has examined the state of AI at any time in the last three decades (or ever used OCR software) will know that it is considerably more effective to pay people to do this than to use software.
Better plan. Sack the worthless morons who devise this sort of thing and use the money to make better stuff that people will actually like. Then they'll post nice comments about you for free. Yes, for free.
And perhaps start behaving like business folk again, making ever better stuff to make more money, instead of behaving like third rate politicians, trying to manage customers, manipulate your image and control the world using copyright legislation, lock-ins, 30% platform taxes and lawyers.
If you try too hard to be like a politician, you'll only succeed, because it isn't that difficult. Then you'll look like a prick and everybody will hate you. And you will lose sales.
Paying people or software to say nice things about you. What does that really say about you?
READY.
█
"Anyone who has examined the state of AI at any time in the last three decades (or ever used OCR software) will know that it is considerably more effective to pay people to do this, than use software."
Have you read the BBC's Have Your Say page recently, or the Daily Wail's letters page? Shouldn't be hard to come up with an AI that can do better than the sort who write in to them...
I for one welcome our coherent and well spelt AI overlords.
Assange caused the death of countless patriotic servicemen and betrayed his sources. Look at his personal history, he must be unstable. He's ugly too, and created both HIV and ebola. Now respond to this troll, just don't look at the content of the leaked documents and what they reveal about <%STRTABLE ERROR 32%>
Imagine the poor fools at HB Gary. On one hand their brand is now known to hundreds of thousands of people who had never heard of them. What phenomenal publicity they have gained!
Except that it must be embarrassing to be a security company that is shown to be ludicrously insecure.
At this juncture I'd say that everyone above janitor level is flailing about in a panic trying to find something, anything that might have a positive effect on their shrinking client base.
Likely the best thing to be done is to just let them twist in the wind until they file for bankruptcy.
(Paris, coz, like, she was hacked too remember, and had all of HER proprietary data stolen, and like, she cried like a baby too!)
Funny stuff. Did someone just smell the coffee and wake up over at Kos? I love the expressed outrage, it's almost as if this was ground breaking news and not something that has been going on for years. The only new bits here are that it's being openly discussed and the FedBizOops bit is just a sign that Uncle Sam is too dumb to realize that this would be more effective if you didn't tell everyone you're doing it.
In the long run this only undermines the trust folks have in the newer media formats, which is undoubtedly why Kos' Happy is so upset. Then again, many of these people are shouting into an echo chamber anyway, so the effect of corporate/fed poseurs going in to try shouting down those they perceive as dissenters is going to be right near nil. Either that, or people will have to relearn how to read Pravda.
The shills are easy to spot. They talk just like the marketing droids that they are and come out with weird arguments such as:
"product A is the best because of its horizontal strategy mobilization"
Case in point: "I like Starbucks because of their friendly customer orientated personnel, allegations that their coffee tastes "like pee" are a false paradigm implementation. Our, I mean their staff are trained to the highest standards of rigorisation. I rebuke your insinuation that a company with 17.8% profit growth does not facilitate adequate coffee mouth feel."
Too many times I noticed rational arguments/opinions being met with complete dismissal before being finished with a flourish as to how good/loving/legally correct the object of the discussion is. Seems moral arguments mean nothing to these people.
Who can remember all the 'anonymous cowHerds' defending Vodafone's tax efficiencies, or when Israel shot up a boat carrying humanitarian supplies? Remember the amount of downvotes and the truly stupid defences posted in bucketloads by the cowHerds.
I'm sure the mention of Vodafone and tax in the same sentence will have them rushing to comment on this comment. It's usually something like... 'Well, Mr Tax Free Contractor......
I'm PAYE before they start
Listen very carefully, I shall say this only once!
HBGary's software was not a bot controller, but a set of "workspaces" where one (human) operator could get all the relevant information for each specific "ident" he was assuming at the time, passing himself off as tens of people - a "force multiplier". The workspace would list his posting history on each website, local news, weather and time for the location where the "ident" was supposed to be, etc. Every "ident" had a human behind it, but one human controlled several "idents".
Got it, chaps?
I saw the original article on ${ActivityList} and as someone who deals with ${Client} about ${Topic} I can say that this is a complete beat up. When I researched this issue, I found a great article at ${ClientAstroTurf} that you should all go look at before thinking that ${Client} is evil.
As someone else rightly points out, this has nothing to do with bots or AI. The concept is to allow an individual (not a bot) to manage a number of social media accounts from one computer. Each account should have its own history and believable back story, name and location etc. There will also probably be network separation between each account to reduce traceability.
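The "workspace" described above boils down to a simple data model: one operator, several personas, each with its own back story and per-site posting history. Here is a minimal sketch of that idea; the field and class names are assumptions for illustration, not HBGary's actual schema.

```python
# Minimal sketch of a persona-management "workspace": one operator
# controlling several idents, each with its own consistent history.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    location: str          # drives the local-news/weather/time panel
    back_story: str
    post_history: list = field(default_factory=list)  # (site, text)

@dataclass
class Workspace:
    operator: str
    personas: dict = field(default_factory=dict)  # name -> Persona

    def log_post(self, persona_name: str, site: str, text: str) -> None:
        # Record per-ident history so the operator stays in character.
        self.personas[persona_name].post_history.append((site, text))

ws = Workspace(operator="op1")
ws.personas["jdoe"] = Persona("jdoe", "Leeds", "retired engineer")
ws.log_post("jdoe", "example-forum", "First post!")
print(len(ws.personas["jdoe"].post_history))  # 1
```

The point of the structure is exactly the "force multiplier" above: the software keeps each ident's state separate so one human can plausibly run many of them.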
Not sure why so many people are shocked by this... I'm sure there are rooms full of people doing this all over the world.
> an individual (not a bot) to manage a number of social media accounts .. Each account should have its own history and believable back story ...
I used to post to a forum where one "individual" used to post tens of messages an hour, 24 hours a day. I don't know how he ever slept or even earned a living. Once I was in discussion with the day shift and later on the evening shift had an attack of amnesia and couldn't recall our earlier conversation :)
I have also seen adverts for bloggers for the UK edition of Big Brother, where you hung about and steered the conversation in the right direction.
Once you get in a corporate setting there will always be one asshat who thinks he or she can promote the brand and by inference themselves by being a back stabbing idiot, then disappear into the background when it blows up.
Unless the company has an explicit "Don't be an ass" policy backed up with sanctions, this will always go on in the corporate environment, especially when it's tacitly encouraged by current business practice.
“Everyone is guilty of something or has something to conceal. All one has to do is look hard enough to find what it is,” from Solzhenitsyn's novel Cancer Ward; Google's Eric Schmidt said (joked?) slightly more recently "that every young person will be entitled to automatically change their names when they reach adulthood in order to escape all the embarrassing stuff they did on social networking sites."
However, in Germany this year a company allegedly denied a person a job because, suspiciously, there was NO FACEBOOK data to be found about him.
I think digital-footprint wars, with one side (the US State, via FedBizOpps.gov) using Anonymizer IP Mapper and Anonymizer Enterprise Chameleon 'multiple persona swarm management' technologies, mean that internet Lusers will need balancing enabling technologies, to put it non-euphemistically.
So you, an intelligent, experienced reader, can detect some of the "black propaganda" (specific technical term) that is spread in online forums.
But you probably don't detect all of it.
Bland, average-intelligence comments may be made sincerely by bland-minded Internet users of average intelligence, or they may be cunning fakes.
It's like plastic surgery. You know how bad plastic surgery looks, but that's when it's done wrong. Done right, you don't notice, it just works. Nobody would do it if it always looked like crap.
In fact, EVERYONE IS AT IT.
Then there are independent, individual Walter Mitty types who fantasise about their wide and mostly imaginary experience in military service, IT, politics, and give you the benefit.
And then there are the mentally ill / personality disorder contributors.
You see all of the above in comments on ZDNet's web site, for instance.
And I believe it discourages sincere participants from making an effort to contribute.
Is it so easy to spot such puppets? At what point does a crowd's enthusiasm for a product overtake the reality of that product and blind them to any flaws? And how can one determine whether posters' motivation to support something, perhaps irrationally, is because they have a personal or commercial interest in the product or subject, or because they are just willing to go along with their peer group and try to be accepted by it?
I'll avoid mentioning i- or Linux, to keep this subject off people's pre-determined opinions, so how's about this example, where a remarkable degree of support is shown for an apparently faulty product:
http://www.thumpertalk.com/forum/showthread.php?t=957323
So, which posters there, if any, are sock-puppeting? Or do you agree it is really very hard to tell?
I find that a lot of people on forums are incredibly sour and simply enjoy playing devil's advocate. At which point groupthink kicks in and you get a crowd of people flat out insulting someone right to their face for having the nerve, the cheek, to speak up about a bad experience they had with a particular product.
It starts off with simple "well clearly you are doing it wrong" posts and spirals into all out death threats and vicious personal attacks. Luckily if they are shills, they are only giving their product a bad name by making their customers look like assholes.
In summary, people love to pile in on a good scrum. Facts become irrelevant after the first page; when the OP starts trying to rephrase their complaint in a less ambiguous way, stating clearly that they were not at fault, the other posters simply start making shit up in order to get a rise.
>>"you know, I'm fairly certain that assuming a false identity in order to obtain a financial advantage or to secure goods/services is called fraud in the UK...."
Given what people seem to get away with without being done for fraud, I'm not sure someone would be likely to get done for writing a puff piece or slagging off a competitor under an assumed name.
In reality, the bad publicity from being dumb enough to get caught may well be more of a deterrent than the slim possibility of other action.
"No matter the advances in artificial intelligence over the years, “real” people remain good at identifying fakes. If you watch even a couple of contentious hashtags – in Australia, #nbn (the hashtag Aussies use to discuss the National Broadband Network) will do as an example – the auto-Tweets stand out as if lit by neon."
Some posters have already expressed doubts about this statement - but they're beside the point. It's fallacious on its surface. Without an independent test to confirm which tweets are machine-generated, you can't demonstrate an absence of undetected machine-generated tweets in a given sample; and you can never demonstrate an absence of them in general.
What's sometimes called "Shannon's method" (in Natural Language Processing) - training a language model, then using it as a generator and trying to gauge the verisimilitude of the output - has always suffered from this problem. You can evaluate that output subjectively, in which case it's anyone's call; or heuristically (like your suggestion that "watch this" links are giveaways), in which case it's probabilistic; or by evaluating against another model, which is easier to automate and captures more information, but is still probabilistic, and requires you have a better model to check it against.
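The method described above can be demonstrated in miniature: fit a character-level bigram model, use it as a generator, then score the output with a (here, the same) model's per-character log-probability. This is a toy sketch on made-up text, only meant to show why the evaluation stays probabilistic rather than giving a yes/no fake-detector.

```python
# Toy "Shannon's method": fit a character bigram model, sample from
# it, then score the sample. Corpus and parameters are made up.
import math
import random
from collections import defaultdict, Counter

def fit(text):
    """Count char -> next-char transitions."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(model, start, n, rng):
    """Sample up to n characters, following transition frequencies."""
    out = start
    for _ in range(n):
        nexts = model.get(out[-1])
        if not nexts:
            break  # dead end: no observed successor
        chars = list(nexts)
        weights = [nexts[c] for c in chars]
        out += rng.choices(chars, weights)[0]
    return out

def avg_log_prob(model, text):
    """Per-character log-probability under model (add-one smoothed)."""
    vocab = set(model) | {c for m in model.values() for c in m}
    total, n = 0.0, 0
    for a, b in zip(text, text[1:]):
        nexts = model.get(a, Counter())
        p = (nexts[b] + 1) / (sum(nexts.values()) + len(vocab))
        total += math.log(p)
        n += 1
    return total / n

rng = random.Random(0)
train = "the cat sat on the mat and the rat sat on the cat "
model = fit(train)
fake = generate(model, "t", 40, rng)
print(repr(fake), round(avg_log_prob(model, fake), 2))
```

Comparing the score of a suspect text against scores of known-human text gives you a likelihood, never a proof, which is exactly the limitation argued above.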