Re: A good pro-bono opportunity for Google
If Google hasn't already done it.
Remember the news when they were logging everyone's WiFi access points with the cars that shot the Google street view photos.
What you describe is tinkering with electronics. Don't worry, it won't be outlawed.
I don't know the type of designs you have worked with. My recollection of electronic engineering at university includes solving equations that didn't fit onto a single side of a piece of paper.
From my personal experience of getting an engineering degree and working as an engineer, this can mostly be assessed by evaluating your designs and your ability to troubleshoot problems. As the person gets closer to a professional engineering qualification, assessment shifts to looking at the projects they have completed while working as an engineer, comparing them to their peers, and perhaps interviewing peers they worked with.
Yes, it's not just about being competent at the core aspect of your job, or thinking you are competent because you haven't been fired yet (but nobody has evaluated your work in detail to quantify your skill).
For many people in this situation they don't know what they don't know, e.g. the ethics; and handling the situation where what your employer tells you to do isn't safe or ethical.
While in this case I agree that this is a sensible outcome, in general I think it shouldn't be fine for anyone to call themselves an engineer. The principle is the same as (medical) doctors.
While not everything (professional) engineers do is life or death, a major benefit of laws requiring registration is this: you need someone qualified and registered to sign off on something significant like a bridge or a new airplane control system. The legal system and professional body surrounding engineering provide a strong incentive and a degree of protection so that the engineer signing off on something important will do an expert and unbiased job.
Otherwise you have a situation like at Facebook and Google, where thousands of people with 'engineer' in their titles bowed to their employers' wishes, developed large-scale devious, manipulative and privacy-violating systems, and misled authorities about it.
Professional engineers are in a better (but not perfect) situation to resist this sort of pressure.
I agree. Readers of the Reg have above average understanding of technology.
The average person who wants to stop having their location tracked would expect that "location services" = off will do that for them.
Then there are people on the other side of average. Those unfamiliar with technology, people in the early stages of dementia, people with learning disabilities. How is it OK to mislead and deceive them?
Does the digital industry have a broader cultural problem where they are seen as fair game?
My wife went to EB Games and bought the online game Elder Scrolls Online: Morrowind. After inserting the game disk into the Xbox, it proceeded to download 76GB of data over the course of a few days before the game could be played, followed by a few more GB of patches and updates.
She decided this is a good game and we should both play it, but we only have one Xbox, so we would need to buy two PC versions of the game (a long story). So after updating Steam we did this. Steam started downloading the game on both PCs. Two days later the downloads had reached 46GB and 62GB respectively, then both restarted from scratch. We were not impressed, so we switched to downloading to one PC and copying to the other. Three days and another 76GB later we had it installed and running, with just two patches per PC to download and install after running the game for the first time.
Now that we are playing it the data use is small, but it took 260GB of downloads over 10 days to get set up.
Most other games are similar.
In conclusion: we use a lot of data for gaming. Are we the only ones?
Well, in 2012 I updated my phone OS and then it suddenly began using cellular data at a rapid rate. Especially when I drove or traveled - when it got hot to the touch.
My suspicion was that the phone was monitoring my movements and activities and sending back this data, and the update had included a bug or sent the information hundreds of times more frequently than intended. Why else would a smartphone burn through so much data while not being used?
I closed all the apps and used the phone sparingly. It made no difference.
In 10 days it had used my full 2GB data allowance.
That's the scenario we often hear about.
This is an equally valid scenario for a betting business: a former customer of the business is suspected to have experienced gambling addiction when they became depressed in the past. An analysis of sentiment in this person's tweets and Facebook posts shows that they are making many negative comments. So let's target them with advertisements to encourage them to gamble.
The technology exists to do this.
Is the business justified in the logic that the customer will probably begin heavy gambling soon and it might as well be with us rather than a competitor, so we should show these adverts?
They are doing many things that are in legal gray areas and a few things that are illegal but can't be proved, such as lying about costs and licensing fees.
Changing the laws isn't a silver bullet.
Enforcing existing laws is another option that would help. Although that would require paying people to do the work, which is politically unpopular. And I think there is little political will to tackle this as long as those companies keep donating to political parties.
Yes and no.
Most of what they do could be argued to be legal (but is in a gray area since it goes against the intent of the law). And they get to keep the money while the legality is debated. And if they are found to owe the money in the end, it is already in a tax haven and impossible to get back.
However most of these schemes involve a step in a secrecy jurisdiction, such as the Caribbean or Switzerland, where the ownership or value of something is recorded and hidden, and the value or ownership is then falsely declared (as something else) to governments outside the secrecy jurisdiction. This is why the Panama Papers are so important - they prove that many big companies and wealthy individuals were lying, and therefore evading tax rather than avoiding it.
The sad part is the lack of action. Perhaps that is related to 4 prime ministers being revealed as having secret offshore accounts.
I wish I could upvote your comment twice.
PCI is a very strong framework to prevent these types of issues. If major corporations are not following these practices then someone senior there is incompetent or negligent.
There is no need to reinvent the wheel.
Does this mean that all businesses will eventually become fraudulent and criminal?
Banks used to be quite trustworthy (a long time ago), and when you see what has come up in the Australian Royal Commission into banking so far, the misconduct, deception, and fraud is substantial. It got me wondering about dishonesty in business, and I see the inevitability of your answer.
I have finally got mine up and running, and it seems to me that the answer is that WD apps and software attempt to upload the data to the device via the INTERNET.
Hopefully I am wrong, or I can mod this device to just be a NAS. Because I was shocked that a device that is physically in my house and connected to my LAN would work by uploading the files from my PC to a 'who knows how secure' WD cloud drive so that it could be downloaded onto the NAS that is located 1m from the PC.
Is this an NSA sponsored project?
$250,000 per year is not a lot of money to operate a service that is at times critical and people's lives depend on it. Compare this to running air traffic control.
It might be an adequate amount though. So for me it's not clear if the developers and operators of the contract stuffed up, or if such an important service was done without adequate ongoing investment.
As with many government projects, it would be most cost-effective to operate a fire warning service nationally rather than have an app for each state. That way all the cross-device testing, and the upgrading for new devices and OS updates, goes much further.
Employing people is an important benefit of having businesses in the economy. So is paying tax.
But let's not view employment as a charitable act. The workers have to work in return for their wages. Business shouldn't have to be paid or rewarded through tax breaks to employ people.
That is how the software is designed (compartmentalised).
The issue is that if the file being analysed contains some arcane pattern of data that causes the analyser to crash, then all bets are off about what happens next. That is a common tactic that hackers use on any software.
It is not possible to test the software against every combination of data that could ever be fed to it, so the software would be vulnerable to this risk, as is the case for a lot of software.
Some data and calculations or logic on that data need to be fed into the processor together. There is no way around that.
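The risk described above can be illustrated with a toy sketch (all names and the file format are invented for illustration): a naive parser that a crafted input can break, and a simple random fuzzer of the kind attackers and testers use to find such inputs.

```python
import random

def naive_length_prefixed_parser(data: bytes) -> bytes:
    # Toy format: the first byte declares the payload length, the rest is payload.
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        # A crafted input declaring more payload than it carries trips this;
        # without the check, later code would silently operate on truncated data.
        raise ValueError("declared length exceeds actual payload")
    return payload

def fuzz(parser, rounds=1000, seed=None):
    # Throw random byte strings at the parser and record which inputs break it.
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            parser(blob)
        except Exception as exc:
            crashes.append((blob, type(exc).__name__))
    return crashes
```

Even this trivial fuzzer finds failing inputs almost immediately; real analysers face far richer formats, which is why exhaustive testing is impossible.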
Are you kidding?
Free press and an independent judiciary are a fundamental requirement of democracy.
Permitting govt/police to spy on journalists kills the free press.
If you need any recent case studies, try Russia, Turkey and Ethiopia.
These are countries that have only recently become totalitarian, and in each case it is clear that neutralising or eliminating the media was a necessary step in dismantling democracy.
Google did have some tech innovations such as non-locking distributed data updating. Basically their crawlers updating the search database while large volumes of queries were being run against that data.
Their labs projects were quite innovative too.
They shut most of that down years ago, but their profits continue to grow, their deceitfulness about spying/data collection grows, and their tax avoidance activities flourish.
Google buys the traffic information from telcos, who supply a data set describing the rate at which devices enter and leave each 'cell' in the network. From that, plus the government-supplied CAD files for the road network, they can calculate the volume and speed of traffic flowing along each road.
The telcos collect this data for every SIM device that is powered on, which explains why there is so much data.
I know this because the digital map company I worked at was offered the ability to purchase this service before Google back in the day.
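A back-of-envelope sketch of the speed calculation described above (function and data layout are my own illustration, not the telcos' actual feed format):

```python
def average_speed_kmh(traversals, segment_length_km):
    # traversals: (enter_time_s, exit_time_s) pairs for devices handed over
    # into and out of the cell covering this road segment.
    speeds = []
    for t_enter, t_exit in traversals:
        hours = (t_exit - t_enter) / 3600.0
        if hours > 0:
            speeds.append(segment_length_km / hours)
    # Average across devices smooths out per-device noise.
    return sum(speeds) / len(speeds) if speeds else None
```

For a 1 km segment, a device taking 60 seconds to cross implies 60 km/h; one taking 120 seconds implies 30 km/h, so those two devices average 45 km/h.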
Of course they will look through the data cherry picking information.
Every person has confirmation bias, so they will be looking for data which supports their position. Hopefully you have a defence lawyer looking through the data for information that exonerates you.
Because there is nobody whose job it is to look through all the data and form an impartial opinion about what they find.
While these services are designed for honourable purposes, there are still risks: if cost wasn't a barrier they could be abused for spying and keylogging; rogue individuals at the vendor or the customer organisation could abuse the data obtained; the data could be accessed by hackers; or a company providing these services could be bought by a criminal organisation and systematically abused without the other parties being aware.
What can go wrong can be seen in examples like the advertising syndicates that collect as much personal data as they can and sell it to anybody that they can; and Facebook whose platform was used for unintended malicious uses plus Facebook's greed in doing business with anyone who would pay.
I gather this type of data for my work to improve websites. After years of working in the field I have settled on an approach: I invite users in for a session, ask them to sign a release, observe them in person, and record the screen and sound. I pay them for the effort, and provide a written guarantee that the information will only be used for improving the website.
Using online interaction recording services is an attempt to get the data cheaply, and I don't think it is worthwhile overall: the privacy situation (e.g. consent) is worse, and the quality of the information is lower than in-person studies. Online tools are only useful for running analysis across large numbers of sessions, or for meeting (unnecessary) requirements to include hundreds or thousands of people in a study.
Re: Shower of shites!
"Isn't it getting a bit old trying to change the subject. Every time more evidence is found about Russian collusion all we hear from Big John is "what about the democrats?" "
This is not whataboutism, because no one is being accused of hypocrisy.
The accusation is that the subject is being changed - which is a red herring argument.
At the risk of creating a misleading analogy, that logic of fairness is not applied to most areas of life:
Why aren't organised criminals and ordinary citizens both investigated by the police?
Why don't all employees get the same bonus irrespective of how good they are at their jobs?
Why don't conspiracy theorists and qualified experts get the same amount of airtime?
Because equal treatment and fair treatment are not the same.
Thanks for the link. I bought the book and read it because I had a nagging feeling that mocking Trump and shaming his supporters might be ineffective or even counterproductive.
A fascinating read - I learned a lot and have plenty to mull over about how to interact with people with certain personality types.
Let's hope some effective legislation is put in place to tackle this, and that it is enforced.
I rate the chance as less than 50%. And the chances are lower in the US than EU.
An effective tactic to sabotage this behaviour is to feed bad data into these systems. They have been designed around the principle of obtaining as much data as possible, and they simply assume that the data they collect is accurate. It is then freely sold, shared, and compiled between the data companies.
If 20% of the data were poisoned, intelligence drawn from it would be too inaccurate to use. Correlation would produce significant numbers of false positives. The compilers of data wouldn't know which data was bad or when bad data started entering their systems, and cleaning it up would be unfeasibly effort- and time-intensive.
Web tracking cookies, ad tracking, and email tracking are all vulnerable to spamming junk data into the databases.
A botnet would take this to another level with diverse geoip information spamming.
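A minimal sketch of the kind of junk-data generator being described (the field names and value lists are invented for illustration; real trackers each have their own schema):

```python
import random
import uuid

PAGES = ["/home", "/products", "/about", "/checkout"]
REFERRERS = ["google.com", "bing.com", "twitter.com", "reddit.com"]

def junk_tracking_event(rng=random):
    # Each event carries a fresh fabricated identity and random browsing
    # details, so nothing correlates with a real person or across events.
    return {
        "client_id": str(uuid.uuid4()),
        "page": rng.choice(PAGES),
        "referrer": rng.choice(REFERRERS),
        "dwell_seconds": rng.randint(1, 600),
    }
```

Flood enough of these into a tracking endpoint and any profile built from the data is diluted with noise that is indistinguishable from genuine traffic.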
Since algorithms clearly aren't remotely good enough to spot fake news, scams and fake accounts on Facebook, it is clear that actual people need to be hired to do this.
Facebook can afford to do this.
Zuck is probably concerned that the cost of employing people to do this on a long term basis will reduce profitability too much.
But hiring an army of people to do this will have a rapid impact on those who are currently abusing the weaknesses and loopholes in FB. It won't be as easy to adapt their activities to fool people as it has been to fool algorithms.
This will make it too hard for most abusers to keep going because they need a broad reach and low cost to achieve their goals. They can't be as confident they will succeed in that environment.
This will have the effect of driving them to other platforms where policing/moderating is weaker, e.g. Twitter. The cost of human oversight for FB then falls, and FB will have a lot more examples to feed their AI to create Zuck's dream solution of automated moderation (if that is actually possible).
Employing lots of people might also be good for FB PR, unless Zuck's zeal to eliminate jobs is too strong!
If a competitor does it first, the dodgy operators will move their activities to other platforms e.g. Facebook, making their current problems worse.
It wasn't a survey and the 71 people isn't a sample.
When I run surveys I select a sufficient sample size to get the desired confidence interval and margin of error. When I run user testing I select the number of participants based on a logarithmic curve. There is good literature on the web about this.
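The confidence-interval/margin-of-error calculation mentioned here is the standard textbook formula for estimating a proportion; a minimal sketch (not from the original post):

```python
import math

def survey_sample_size(margin_of_error=0.05, z=1.96, p=0.5):
    # n = z^2 * p * (1 - p) / e^2 for a large population.
    # p = 0.5 is the conservative worst case; z = 1.96 corresponds
    # to a 95% confidence level.
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)
```

This gives the familiar 385 respondents for 95% confidence with a ±5% margin of error, and 1068 if the margin is tightened to ±3%, which is why surveys need far larger samples than usability tests.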
An analogy is if a car company builds a new model car and they have some customers test drive it. The first 5 customers all claim that it has bad handling. They test it with another 10 customers who all say the same thing. How many more customers should they test it with to validate this? Do they need to test it with a statistically significant sample of the population of the country, the world?
In reality the company would start to investigate the issue at that point. They would probably retest with some more customers after making improvements, rather than continue to test after the first 15.
A designer presented two alternative design concepts to me for a major project 3 years ago. One design was flat. I had my team test both designs with users and rejected the flat design, and built the other.
The reason I didn't shout this from the rooftops was that I had no reason to help competitors make the right choice.
In terms of the time lost, 22% isn't high. I've just completed some UX testing like this, and found issues where people take 2-4 times as long to complete tasks in areas of the site with bad labelling - that is, 200%-400% of the normal time.
Every participant under 60 who I have tested with over the years is short of time, so taking longer to complete a task means they may not have time to do another task, and the risk of an interruption, e.g. a phone call, increases in proportion to the time taken. An interruption brings a high chance of the task being abandoned altogether.
Within an organisation, a 22% reduction in employee efficiency while using a website such as the company intranet is very significant.
Ad revenue comes from clicks and views, not how long each ad is visible to the user, so this will reduce advertising revenue.
For any eCommerce website, being 22% more time-consuming will translate into about the same loss of revenue; plus competitors are just a click away, so under-performing sites can lose customers to them.
This is a sound methodology. 71 users is far more than needed to come to a solid conclusion. There is not enough space to go into it all here, but UX Matters tackle the sample/validity question in the "Studies for problem discovery" section of http://bit.ly/2wDVkrc
User testing is not Marketing hokum. In my experience Marketing managers can get angry when they receive the results of user testing, as it cramps their style, debunks assumptions, and creates a high and measurable bar for design quality where traditionally there was no accountability. If it were easy to manipulate, they would do that rather than get angry.
On the other hand in my experience, devs more often claim that the methodology is invalid due to statistical sample size requirements. But this type of study is comparable to testing software for bugs. You don't need thousands of testers to find the major bugs. More than 3 is a waste of money because the extra bugs they find will be minimal.
If you work in the web industry and think UX techniques are unreliable, you are missing out on an important tool to build quality UIs.
I didn't say that fact checking would result in more left wing ideology.
As per my earlier post, facts and ideology are different things.
People are always going to disagree or argue on ideology to some extent.
Getting into more detail where plans are proposed and relevant facts reviewed is a different activity.
Mixing the two leads to serious issues when facts don't align with ideologies, so they are dismissed, ignored, or alternative-facts are used.
My personal opinion is that ideologies alone are not going to achieve anything, and they are a first-world indulgence that distracts humanity from solving real problems: unemployment, disease, crime, etc.