This is getting beyond contemptible now.
This should be programming 101.
It reflects so badly on the rest of us who dedicate our time to learning and doing things correctly.
A Brit biz whose mobile apps monitor the mental state of 35,000 British schoolchildren is having to rewrite them after researchers found hardcoded login credentials within. "Tracking steering biases is a pioneering technique developed by STEER using AI to identify patterns of bias linked to mental health risks in 10,000 test …
Sloppy coding and deployments are run of the mill now, despite there being an abundance of security checklists for such common requirements as how to 'harden' response headers for Varnish or Cloudflare caching, or how to set up SSL certificates. Lots of companies (TCS and Accenture, to name just two) often fail dismally to check whether a deployment from Test/UAT to production has removed development credentials etc.
This is usually coupled with a reluctance to pay the extortionate fees required for full penetration testing.
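That Test-to-production check doesn't even need a pentest; a crude pre-deployment scan catches the obvious leftovers. A minimal sketch in Python — the patterns and the fail-the-pipeline exit-code convention here are my own illustrative assumptions, and a real pipeline would use a dedicated secret scanner:

```python
import re
import sys
from pathlib import Path

# Patterns that commonly indicate leftover development secrets.
# Illustrative only, not an exhaustive ruleset.
SUSPECT_PATTERNS = [
    re.compile(r"password\s*[:=]\s*['\"]?\w+", re.IGNORECASE),
    re.compile(r"api[_-]?key\s*[:=]", re.IGNORECASE),
    re.compile(r"BEGIN (RSA|EC|OPENSSH) PRIVATE KEY"),
]

def scan_tree(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line_no, line) for every suspicious-looking line."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for no, line in enumerate(text.splitlines(), 1):
            if any(p.search(line) for p in SUSPECT_PATTERNS):
                hits.append((str(path), no, line.strip()))
    return hits

if __name__ == "__main__":
    findings = scan_tree(sys.argv[1] if len(sys.argv) > 1 else ".")
    for f, n, line in findings:
        print(f"{f}:{n}: {line}")
    sys.exit(1 if findings else 0)  # non-zero exit fails the deploy step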
And because the managers just "want stuff done now" for "market and/or release", "we have to hit this made-up target", despite being told time and time again it won't look good if there is then a security issue. "Fuck security. Just get it done. I want my bonus for hitting bullshit targets." And because those bonuses aren't then clawed back when a massive security issue is revealed, it will continue.
I wouldn't be too surprised if the "third party developers" were the ones giving money to Steer.
I mean, there's a smorgasbord of monetizable data, either by selling it to governments, or to criminals, or to multinationals. For some unknown reason the separation between these three categories seems to blur noticeably every month. Sigh...
If privacy and security were absolute priorities, then this wouldn't have happened.
I'm a pentester and it reminds me of a client that believed it wasn't possible to develop secure software, and that the only possible method of making something secure was to give it to the pentesters once complete. He was moaning about the fact that we kept finding things on each retest, so reckoned we should only test the exact things found last time so that they could get a clean report.
And YES, if professional pentesters reveal only a fraction of issues at a time, so that they are certain to find and bill new ones for the next audit, it does not help to establish trust and respect with the profession, and it certainly discourages business.
Of course never on purpose... However, you will probably know that pentesting engagements are extremely limited in time and scope. Therefore, if we find, say, XSS in a few locations, we report those with examples, but won't go around the site finding every single instance. The client, with access to their own code base, is better positioned to do that.
Time is better spent finding other vulnerabilities imo. Better a report that finds 10 different things than 1 thing but highlights every instance. Often these vulnerabilities are found manually rather than by automated scanners.
Pentesting engagements are on average 4 days including report, so not everything can be uncovered in that time, especially on a system resembling Swiss Cheese. We don't know how much we'll find until the testing window starts you see.
The problem being that if you report systemic XSS and give one or two examples, the client typically fixes the examples and ignores the wording telling them to check the rest of their code and implement something robust. In many cases the fixes will also be very poor - for instance, I've seen a report where the example was a typical alert box containing the string "XSS", and their "solution" was to check for that exact injection string.
Plus you get other "fixes" where people completely fail to understand basic security concepts. You find a bug like XSS or whatever, and their "solution" is to encrypt the form data in JavaScript first, because encryption is the answer to everything... Never mind that the attacker controls the client and can therefore encrypt whatever payload they want too.
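The difference between the blocklist "fix" described above and the robust one can be shown in a few lines. A hypothetical sketch in Python (the handler names are made up for illustration): one version blocks only the exact payload string from the report, the other encodes output for the HTML context so any markup in the input is rendered inert.

```python
import html

def broken_fix(user_input: str) -> str:
    # The "fix" from the report: block the exact demo payload.
    # Any other payload sails straight through.
    if user_input == '<script>alert("XSS")</script>':
        user_input = ""
    return f"<p>Hello, {user_input}</p>"

def robust_fix(user_input: str) -> str:
    # Encode for the HTML context at the point of output, so *any*
    # markup in the input is displayed as inert text.
    return f"<p>Hello, {html.escape(user_input)}</p>"

payload = "<img src=x onerror=alert(1)>"
print(broken_fix(payload))  # live <img> tag survives: still exploitable
print(robust_fix(payload))  # tag escaped to &lt;img ...&gt;: harmless text
```

The point being that the robust version fixes the class of bug, not one instance of it, which is exactly what the "check the rest of your code" wording in the report is asking for.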
I'm sure they do. Even state-funded secondary schools have some arrangement in place. But sometimes children won't go to see them (perceived stigma with their peers, or they don't get on with the person available), and sometimes the school counsellor will sign them off but then some weeks later there is a problem again and nobody knew about it. But there is no reason anybody should suspect what's going on if a child is fiddling with a phone. An app would provide another way of alerting somebody about a child that's struggling.
A quick look at the STEER web site suggests a school makes the app available to all pupils and can then identify those that may be at risk of mental health problems. From experience I know that the sooner things like eating disorders and self-harm can be treated, the easier they are to deal with. When my daughter was being treated for an eating disorder and we informed school they were very surprised that she was having a problem - teenagers (and, it seems, particularly girls) can be very good at masking these things.
Anon for what I hope are obvious reasons
“Cognitive-affective heuristic biasing contributes to successful navigation of epistemically varied tasks in secondary school”
“Implications are posited for how we understand the relationship between errorful knowing and wise action”
“Containing the emotional dysregulation”
"Data privacy and security are Steer's absolute priority"
Yes, now it is, because you realize just how much your reputation is fucked. That said, you apparently only had the realization after El Reg had to shove it up your nose.
There is no excuse for hard-coding credentials in an application, and I don't care that the account has been disabled. Those credentials should never have been coded in in the first place.
I will STEER well away from your applications in the future.
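For anyone who needs the "programming 101" version: the fix is to read secrets at runtime from wherever the deployment injects them, and fail fast if they're missing. A minimal Python sketch — the variable name and error message are my own illustrative choices:

```python
import os

# Hardcoded -- ships in every build and lives in version control forever:
# DB_PASSWORD = "s3cret-dev-password"   # never do this

def db_password() -> str:
    """Read the credential from the environment (populated by the
    deployment's secret store) rather than baking it into the code.
    Raising on a missing value beats silently falling back to a default."""
    pw = os.environ.get("DB_PASSWORD")
    if pw is None:
        raise RuntimeError("DB_PASSWORD not configured for this environment")
    return pw
```

With this shape, Test/UAT and production each supply their own value, and a build that reaches production without one refuses to start instead of quietly using a dev credential.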
With lots of comments on the failed security, I took a minimal look at the information on the program itself. It advertises itself as being able to track the social environment of the entire school and identify at a glance pupils who need help. Users mostly rated this app with one star on the proprietary marketplace for Android, and the sole five star review is obviously written by a mature adult, as opposed to a user.
I've spent a lifetime in "health", from before questionnaires for mental health became fashionable, and I am well aware of the failures of the format when trying to assess mental health.
I'm still tempted to download it and trial those passwords and see what's inside, because it sounds like "1984".
I wouldn't bet on the latter either. Just because even fewer people understand it than who understand basic sanity in software development doesn't mean that there is some inherent robustness to it.
"Data require (...) algorithm to interpret it" - not this again. Something like my XLS files needing a separately stored copy of Excel to interpret them?
I did try to implement a 'security from the bottom up' system. You couldn't access any data, or any app that could access that data,
without permission, the permissions being devised by the owners of the data etc.
Turned out management didn't actually want any security, really - they were more interested in finding out what other parts of the company were doing well, so they could absorb them and claim their commission cut.