Finally, sensible fashion choices coming back
My take on this is that clothing from my childhood is about to make a comeback.
There is no better feeling than walking the streets incognito.
"... he remained confident that his way of doing business was going to make it very hard for anyone to prove a conspiracy – in which he was proven right"
This has the clear fingerprints of the Illuminati who, as is well known, are the undisputed masters of conspiracies without proof.
There is a tremendous difference between If User.sex == 'female' then Display('cosmetics') and Match(User.interests, available_ads).
The first explicitly targets 100% of women (well, those identifying as 'female').
The second targets those who have expressed interest in things like: hiding skin blemishes, eyeshadow, comparisons between Maybelline and Revlon mascara, skin toning, etc. It does NOT select 100% of women, only those who have expressed interest in those sorts of things. In like manner, it does not ignore 100% of men, because it selects any man who has expressed interest in them. Now, if 10% of women and 1% of men expressed interest, women would make up roughly 90% of the target audience.
Checking the gender of El Reg readers algorithmically and replacing the job advert with one for cosmetics if you think I'm female - immoral, stupid, and arguably illegal.
I agree, this would be illegal, but the paper isn't saying that coders have written code along the lines of:
if User.sex == 'female' then Display('cosmetics')
What the paper is arguing is that when code like this:
Match(User.interests, available_ads)
returns 'cosmetics', it has done so correctly based on the user's interests; but when you look at the aggregate set of users for whom 'cosmetics' was returned, you notice that it is predominantly women. Since this 'clustering' is higher than you would expect from a random sampling of users, the algorithm is inadvertently discriminating towards women (or against non-women). Since sex discrimination is illegal, this algorithmic bias is "illegal".
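To make the aggregate effect concrete, here is a toy sketch (all numbers and names are invented, not taken from the paper): no line of code ever checks sex, yet the delivered audience skews heavily female simply because interest rates differ by group.

```python
# Toy sketch: interest-based matching with no sex check anywhere,
# yet the aggregate audience skews female. All numbers are invented.
def match(user_interests, available_ads):
    # select any ad the user has expressed interest in
    return [ad for ad in available_ads if ad in user_interests]

# 10% of 1,000 women and 1% of 1,000 men express interest in cosmetics
users = ([("F", {"cosmetics"})] * 100 + [("F", set())] * 900 +
         [("M", {"cosmetics"})] * 10 + [("M", set())] * 990)

audience = [sex for sex, interests in users if match(interests, ["cosmetics"])]
share_women = audience.count("F") / len(audience)
print(round(share_women, 3))  # 0.909 - roughly 90% women, with no sex test
```

The "discrimination" the paper measures is this aggregate skew, not anything in the matching code itself.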
Consider this from a different angle. If the algorithm predominantly showed ads for Romulan Ale to people who are Star Trek fans, but not Star Wars fans, you could make the same argument that it discriminates against Star Wars fans (or towards Star Trek fans) when placing ads for Romulan Ale. If Star Trek and Star Wars fandom were protected categories, this would be "illegal".
The paper lays out the following:
"Intuitively, the goal is to show ads that particular users are likely to engage with, even in cases where the advertiser does not know a priori which users are most receptive to their message. To accomplish this, the platforms build extensive user interest profiles and track ad performance to understand how different users interact with different ads. This historical data is then used to steer future ads towards those users who are most likely to be interested in them, and to users like them."
Which is exactly what you would expect effective advertising to do: target based on individual interests.
However, the paper then shifts from the individual to the group with the following argument:
"However, in doing so, the platforms may inadvertently cause ads to deliver primarily to a skewed subgroup ... if these “valuable” user demographics are strongly correlated with protected classes, it could lead to discriminatory ad delivery"
So, if the targeted individuals are strongly correlated with a protected class, then the targeting becomes "discriminatory". As the paper continues:
"For example, ads targeting the same audience but that include a creative that would stereotypically be of the most interest to men (e.g., bodybuilding) can deliver to over 80% men, and those that include a creative that would stereotypically be of the most interest to women (e.g., cosmetics) can deliver to over 90% women. Similarly, ads referring to cultural content stereotypically of most interest to black users (e.g., hip-hop) can deliver to over 85% black users, and those referring to content stereotypically of interest to white users (e.g., country music) can deliver to over 80% white users"
Which, to me, suggests the algorithms are working correctly; the problem is that people engage in "stereotypical" patterns of behaviour, which the algorithms then pick up on.
The problem with LGBTI is that it is not inclusive enough.
You might consider: LGBTQQIP2SAA or LGGBDTTTIQQAAPP or even LGBTIQCAPGNGFNBA. But even they don't capture the full range - hence the little + that is often appended at the end of these acronyms to signify, "While we haven't added you to our list, we consider you a vital and cherished member of our group - except if you are male, especially if you are a cis male, doubly especially if you are a white cis male."
I'm going to engage in a little armchair pedantry based on the video (which I saw) and not the paper (which I didn't read).
When it was doing addition, the prompt was a single blue spot and the possible answers were: 2 blue spots or 5 blue spots. Or (from what I infer the bee to be seeing): region with few blue spots and region with many blue spots.
When it was doing subtraction, the prompt was 5 yellow triangles and the possible answers were: 4 yellow triangles or 2 yellow triangles. Again, a region with many triangles and a region with few triangles.
The bee didn't have to do any arithmetic, it only had to match similar levels of complexity. In my mind, it seems closer to a comparator circuit than an ALU.
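That comparator reading can be sketched in a few lines (my speculation about the mechanism, not the paper's model): simply picking the option whose element count is closest to the prompt reproduces both trials without any arithmetic.

```python
# Speculative sketch: a "comparator" that just picks the option whose
# element count is closest to the prompt - no addition or subtraction.
def comparator_choice(prompt_count, options):
    return min(options, key=lambda n: abs(n - prompt_count))

print(comparator_choice(1, [2, 5]))  # "addition" trial: picks 2 (= 1 + 1)
print(comparator_choice(5, [4, 2]))  # "subtraction" trial: picks 4 (= 5 - 1)
```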
Wow! Who is voting down the historically accurate singular use of they / them / their / theirs / themself?
Is it grammar nazis or SJW nazis - both of whom seek to deny the historic use of 'they' as a non-gendered singular term.
It is easily seen in Chaucer's Canterbury Tales (c 1400):
And whoso fyndeth hym out of swich blame,
They wol come up and offre on Goddes name
And whoever finds himself out of such blame,
They will come up and offer in God's name
This reveals interesting insight into the behaviour of "neural net" image classifiers.
It is a given that the networks have no "understanding" of what they are classifying. The received wisdom is that there is no need to understand - simply fling enough images at the network and it will "learn" how to correctly classify cats (if you don't like cats, substitute motorcycles, mountains, tumours, people, whatever).
We now see that these classifiers are not learning what a "cat" is, rather they are learning the types of images in which cats appear - in other words: cat in a context. Change the context and it mis-classifies.
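A deliberately crude sketch of that failure mode (entirely synthetic numbers, nothing like a real network): a nearest-neighbour "classifier" trained on data where object and context are confounded, and where the context features carry more weight, labels a cat on a road as a motorbike.

```python
# Synthetic sketch of context-dependence: features are
# (is_cat, indoor_context, road_context), and the context features
# dominate because cats only ever appeared indoors in training.
train = [((1, 3, 0), "cat"),
         ((0, 0, 3), "motorbike")]

def classify(x):
    # 1-nearest-neighbour by squared Euclidean distance
    return min(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))[1]

print(classify((1, 3, 0)))  # cat indoors      -> 'cat'
print(classify((1, 0, 3)))  # same cat on road -> 'motorbike'
```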
The "obvious" solution seems to be that the neural nets need to segment images into distinct objects and then classify the objects. This is not a trivial problem.
Seeing how the comments section of El Reg appears over-represented with cis white males, thus creating a toxic commenting environment for the remaining 92% of the world's population*, I propose:
1) Cis white male comments shall comprise no more than 2% of all comments in discussion threads - this allows for a more diverse flow of comments, as well as applying some positive discrimination to redress all past injustices.
2) No cis white male shall be allowed to leave the first comment. Not only would this violate the first proposal, it would also set a toxic environment for other commenters thus discouraging them from leaving a comment.
3) No cis white male comment shall be given prominence over other comments. All cis white male comments must appear at the end of the discussion thread - preferably, requiring secondary authentication each time someone may wish to view them.
4) All comments not made by cis white males shall only have an up vote icon since there is no need to down vote diversity comments.
5) All comments made by cis white males shall only have a down vote icon since there is nothing they could say that could require up voting.
*white population has been roughly estimated as Europe + North America + Australia = ~18%
** white male population is ~50% of white population = ~9%
*** cis white male population excludes homosexual, trans, queer, etc = ~8%
It should be obvious from these numbers that cis white males are the majority oppressors.
> Unfortunately, there are victims to this stategy but it is for the long-term and greater good as all these things tend to be.
A laudable ethic that was used to great effect in the Soviet Union, China, North Korea, Cambodia, etc.
Trample on the rights of one and you trample on the rights of everyone. Or, as Marcus Aurelius put it almost 2000 years ago, "What is not good for the hive is not good for the bee."
If we want to end discrimination, then we must end it, not introduce new forms of it.
I found Ian Hickson's comment interesting:
' W3C "is an organization supported by large annual fees from large companies, and its primary organizational goal is to ensure these companies remain as paying members."'
Who are these unnamed large companies (which are not Microsoft, Google, Apple, and Mozilla - ok, Mozilla's not that big) that are driving the W3C?
Men and women earn the same pay per unit of work. Some units of work are worth more than others.
(1) Those who complete more units of work earn more than those who complete fewer units of work.
(2) Those who have completed more units of work have learned where to find more lucrative units of work and, naturally, go after them. This is down to experience - not gender.
However, what does break down according to gender are:
(1) Women complete fewer units of work than men and therefore earn less. If they completed the same number of work units, they would earn the same.
(2) Women, having less experience, are not as aware of higher paying units of work. If they worked more and gained more experience, they would learn where the more profitable units of work are and go after them.
For the average user this is the best possible solution.
A typical user would far rather know they can count on their phone for, let's say, 8 hours than be told "I can get full performance for 6 hours, but after that I am without a phone".
I agree that Apple should have been more transparent about what it is doing and allowed users the option to change how the phone behaves - after all, some people are going to insist on maximum frame rates for Candy Crush rather than insisting their phone lasts the whole day.
When chipmakers started getting down to really small scales (sub-28nm), the correspondence between scale and technology name broke down.
22, 20, 16, 14, 10, 7, 5, and 3nm are generation / technology names (keeping consistent with previous generation naming), but no longer connected to feature sizes like gate length, etc. In other words: they no longer mean what they used to.
When we went from 28nm to 22nm, transistors stayed pretty much the same size; what happened was that they got rotated 90 degrees - so instead of lying flat on the surface, they stood vertically on edge.
I think all that can truthfully be said about Shiva Ayyadurai's claim is:
In the late 70s he developed an electronic office communication system which he called EMAIL.
He applied a skeuomorphic transformation of real-world office concepts (In Box, Out Box, CC, BCC, etc) to his system.
However, I don't think he can truthfully claim to have invented THE email we use today - which evolved from a completely different set of technologies.
He can claim he invented AN email system, but not THE email system (especially since electronic communications predates his invention).
At best, his was an independently conceived and implemented evolutionary dead end, which probably saw itself as a closed, rather than open, system.
Perhaps the only thing he can truly be credited for is EMAIL (unless there is evidence of earlier use of the term).
The article was a fascinating read and now makes Roger Penrose's notion of conformal cyclic cosmology seem more probable: we are at the end of the universe, the last moments, nothing remains except the last evaporating black hole, which goes out in a huge, brilliant outrush of radiation - and a new universe begins.
Icon choice: closest thing to a big bang.
Normal lensing effects would only be visible if we could "walk" around the lens noting the changes, or if the lens moved between us and the object of interest.
Looking at galaxy pairs 4.5 x 10^9 light years away seems to preclude any possible use of either of those methods.
So the scientists have come up with "weak lensing". Essentially (if I correctly understand),
(1) we assume a uniformity across the galaxy pairs ("a spherical cow of uniform density") - except that they are not.
(2) if we sample the pairs and average them out then any differences (variance / noise) will be statistically reduced, leaving us with nice clean galaxy pair data.
(3) this nice clean galaxy pair data can then be compared against "real" galaxy pairs.
(4) this comparison will result in a difference.
(5) this difference will be attributed to gravitational lensing.
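Steps (2) and (3) rely on noise averaging down. A toy sketch of the idea (the signal and noise values are invented and have nothing to do with the actual survey): a weak "lensing" signal buried in noise becomes recoverable once many samples are stacked.

```python
# Toy stacking sketch: averaging N noisy samples shrinks the noise by
# roughly 1/sqrt(N), letting a faint signal emerge. Numbers invented.
import random

random.seed(0)
SIGNAL = 0.01                   # faint lensing-like distortion
NOISE_SIGMA = 1.0               # per-sample noise, ~100x the signal
samples = [SIGNAL + random.gauss(0, NOISE_SIGMA) for _ in range(100_000)]

stacked = sum(samples) / len(samples)
# a single sample is useless, but the stacked average sits near SIGNAL
print(abs(stacked - SIGNAL) < 0.02)
```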
I will admit the theory behind weak lensing seems solid. Nevertheless, it doesn't sit well with my gut feeling which is saying that if you manipulate enough data and are looking for something, however weak, you are going to find it.
So if I had to ask a question (or two) about this, it would be: "Why couldn't the natural gravity between a pair of galaxies be responsible for the artefact observed instead of invoking a filament of dark matter?" And "Why should there be filaments of dark matter bridging galaxies?"
Hobo icon because I am not feeling terribly smart about all this.
I'm glad I'm not the only one who noticed that.
By definition, a warrant is some sort of authorization. In this case, it is judicial authorization to invade the privacy of a citizen in connection with a presumed criminal offense for which there is no other means of securing evidence.
Had the court simply said "You have no legal authorization to challenge a legally administered warrant." and left it at that, then there wouldn't be a problem. However, adding that the only party legally entitled to challenge the warrant is not permitted to be advised of the warrant is a non sequitur.
(1) Google already automatically classifies images, so it is reasonable to assume they would try to leverage / reuse their image classifiers.
(2) Since video is simply a bunch of still images, it is reasonable to assume Google simply takes stills from the video and passes them to their existing (and trained) image classifiers.
(3) It is pointless to process every-single-frame in the video because that would be prohibitively expensive and there really isn't much change from frame to frame.
(4) Google probably selects only the key frames (I-frames) for classification. (Depending on computation cost, Google may drop key frames that are similar to other key frames - why classify the same, or a very similar, image over and over again? Of course, this depends on whether image classification is more expensive than comparing two key frames.)
(5) It should be obvious that every inserted image is an I-frame, so it WILL be classified.
(6) Google has some algorithm (or neural net) that tries to boil down the contents of a video (several thousand or million images) into a single classification. Clearly, if you film a walk about a city, you will have cars, buildings, people, trees, etc. Google's classifier has to come back with a single answer. This is probably weighted by the confidence of the original classifications. Cars, buildings, laptops, and food plates probably have a higher level of classification confidence.
I imagine that Google will, over time, tweak the final classification to give more weight to duration of a single classification rather than confidence of classification (or perhaps some admixture of the two).
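A sketch of what step (6) might look like (hypothetical - the function and numbers are mine, not Google's): sum classifier confidence per label across the sampled key frames and keep the winner.

```python
# Hypothetical aggregation of per-frame labels into one video label,
# weighted by classifier confidence. All names and numbers are invented.
from collections import defaultdict

def video_label(frame_results):
    """frame_results: list of (label, confidence) for each sampled I-frame."""
    scores = defaultdict(float)
    for label, confidence in frame_results:
        scores[label] += confidence
    return max(scores, key=scores.get)

frames = [("car", 0.9), ("building", 0.6), ("car", 0.8), ("tree", 0.4)]
print(video_label(frames))  # 'car' wins with total confidence 1.7
```

Weighting by how long a label persists would just mean adding a duration term to each frame's score.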
On the other hand, I could be blowing smoke since I have absolutely no idea how Google is doing this, but this is the way I would approach it.
(A pint, because we could spend hours arguing over how this is or should be implemented - or even if it should be implemented at all.)
Of the names mentioned, I agree that Chiwetel Ejiofor has probably exhibited the most Bond-like performance in the Firefly movie (or Bond-villain, for that matter).
Although I can see Matt Smith being Bond - more in a parody sort of way. I think he would definitely trump Rowan Atkinson's Johnny English.
For the average person, for the average use, "more or less accurate" is good enough.
For most people, if the standard of measure (e.g. metre, gram, ampere) they are using is out by a few parts in a million, they probably won't notice; but for scientists trying to perform accurate (and reproducible) measurements, it matters how accurate, stable, and verifiable that measure is.
I would rather think it is more akin to "newspeak" and "doublethink" where you believe as true what is told to you by the "authorities" (in this case "scientists") - whether or not it changes from day to day (or decade to decade).
How can people say, "See, this is how science is supposed to work" when the previous results and research are all batshit from the researchers' arses? Ah! It is because today's "facts" and "research" overturn yesterday's truths.
I think it would be better if, rather than pretending it is fact / truth / knowledge, we all say instead, "I BELIEVE!" with the proviso: "My beliefs are subject to change / adaptation / maturation as I see fit."
Like Frumious, I easily keep 200+ tabs open. His explanation of what is going on in my mind (and my browsing habits) is brilliant, and he has some wonderful browser suggestions.
Those tabs can usually be broken down into clumps of 15-20 tabs on a particular topic, which means 10 to 15 different (or possibly related) topics going at the same time. It is not hard to remember those topics, nor is it hard to remember the fanout under a particular topic. Though, I will grant that remembering all of them simultaneously would be a stretch for me.
If I could propose another browser optimization: we should be able to group a collection of tabs into a "tab group", and clicking on the tab group would expand the 15 or so tabs I have open under it. This would make the number of visible tabs much more manageable.
As with Frumious, I find bookmarks not really usable. Sometimes I use them, but mostly they are a pain.
The problem I see with quantum computing is that everything I read about it (I have not read any "official" papers, though) always sounds the same - in style and substance - as those inventions claiming revolutionary energy use / production. So, I read about it with a very sceptical eye.
As I understand it (from the popular press), the magic of quantum computing lies in it evaluating all possible solutions simultaneously and popping out the correct answer blazingly fast. Now, for me, the question is "how do you code something so that all possibilities are evaluated simultaneously?" It seems obvious to me that if you are coding all the permutations, then, obviously, you are also coding the answer in. Once the quantum haze (or should that be foam?) dissipates, presto, you have the right answer. But ... it would seem that the quantum computer had no choice but to reveal the correct answer, since all other answers would be "unstable" (so to speak) and thus collapse away.
As I said, I don't pretend to understand this one bit ... which is why the WTF? icon.
Which is why Paris for being a little flip. Nevertheless ...
From the official requirements doc:
"The inverter will be tested using a near ideal voltage supply set at 450 V. This power supply will be floating. Its positive terminal will be connected to a 10 Ω wire wound resistor which will in turn be connected to the positive DC input terminal of the inverter. The voltage source will be very close to ripple free."
Efficiency is computed as:
"The inverter must demonstrate an efficiency of > 95 %. The efficiency is defined as:
Efficiency = AC Power Output / DC Power Input
and will be determined by measuring the input voltage and current and output voltage and current, using the real component of the power at the fundamental frequency."
I argue that they do not specify on which side of the resistor they are measuring the DC Power Input.
Input is 450V through a 10 ohm resistor for an output power of 2kVA.
Doing a rough equivalence of 2kVA = 2kW (yes, I know this is not necessarily true)
We have an input side current of 4.4444... amps.
Power dissipated by resistor = 197W.
This is already close to 10% of the 2kW.
Hence, the project cannot achieve its stated efficiency of >95%
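The arithmetic above, idealised the same way (2 kVA treated as 2 kW, and the supply-side current taken as simply P/V at 450 V):

```python
# Rough check of the numbers: current through the 10 ohm series resistor
# and the power it dissipates, treating 2 kVA as 2 kW.
V_SUPPLY = 450.0   # volts
R_SERIES = 10.0    # ohms
P_OUT = 2000.0     # watts

i = P_OUT / V_SUPPLY               # ~4.444 A input-side current
p_resistor = i ** 2 * R_SERIES     # ~197.5 W burned in the resistor
efficiency_supply_side = P_OUT / (P_OUT + p_resistor)

print(round(i, 3))                       # 4.444
print(round(p_resistor, 1))              # 197.5
print(round(efficiency_supply_side, 3))  # 0.91 - nowhere near 95%
```

So if the DC input power is measured on the supply side of the resistor, >95% is unreachable no matter how good the inverter is; measured at the inverter's own terminals, it remains achievable.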