Re: can someone explain...
Ah yes - I'd forgotten you could use tail recursion as well as mapping a function onto a list in these things.
Suppose I want to get an input n times; each time I get it, I test its value and choose what to go and do on the basis of that value before returning and getting the next input. How do I do this in a language like Bosque?
[This isn't some cynical trick Reg-comments-esque question - I don't know the answer and I'd genuinely like someone to explain it to me.]
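Not knowing Bosque syntax, here's the general functional pattern sketched in Python (purely illustrative - `dispatch` and the even/odd rule are made-up for the example): the loop becomes a recursive function that handles one input, dispatches on its value, then recurses with a decremented counter.

```python
# Hypothetical sketch of "get an input n times, dispatch on each value"
# without a mutable loop. The input source is passed in as a function
# so the example is self-contained.

def dispatch(value):
    # choose what to do on the basis of the value (made-up rule)
    return "even" if value % 2 == 0 else "odd"

def run(get_input, n, results=()):
    # tail call: one input handled per invocation, then recurse
    if n == 0:
        return list(results)
    return run(get_input, n - 1, results + (dispatch(get_input()),))

# usage: feed it inputs from an iterator
source = iter([1, 2, 3, 4])
print(run(lambda: next(source), 3))  # ['odd', 'even', 'odd']
```

(Python won't actually eliminate the tail call, but a language with proper tail-call optimisation runs this pattern in constant stack space.)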
I have some audience tapes I made of the good old Grateful Dead in Paris in 1981 (I think - I was a touch out of it then and have a touch of grey now, so memory isn't that great). I'm waiting for the French authorities to pop over on the basis of their amazing powers of deduction etc. Is this seriously down to 'Blues for Allah'? Jesus these people are mad.
Wow. Good for IBM. I'm interested, as an academic who studies vision, in assessing the importance of potential low-level statistical texture descriptors in object or face recognition. I've just filled in the form asking for access to the database.
People who want to do evil things with coded images will already have done it if they are really any good at doing their evil things.
I wasn't noticing enormous differences (or being enormously impressed) until we got to the food pix of the full-English. The Apple one is pretty good. The other two (both Huawei?) are appallingly bad. Food pix seem a pretty good test of image quality because for the food to look appetising you need to capture colour variation, texture, and highlights accurately. I'd hazard these are also pretty important for other natural materials like skin. I'd be surprised if the phone that took the bad food pix could produce good close-ups of faces.
The 50x zoom pics are a joke (but I expect everyone with half a brain must have figured that one out). I can get fairly decent ridiculous-telephoto images with a tripod and an 800mm lens on a full-frame 35mm digital body (actually, for me, a 400mm lens and a 2x teleconverter - all Canon stuff with red rings around the lenses, all very old (e.g. the body is a 1Ds Mk 2 - less than £300), all very eBay), but even then it is hard (I was doing this for work - it isn't a focal length I'd normally use for fun). I'm surprised you can get anything sensible hand-held from a phone (congrats for at least managing to get the centre of the postbox sign, if only to demonstrate the lousy image quality).
Had a VPN for ages for non-pron reasons. Mine costs a trivial amount (less than a pound a week - NordVPN) and was easy to set up and use. Surprises me that everyone doesn't have one. You can get local news from countries that block it otherwise, in reverse get UK-only sites when abroad, get US media services, and so on.
As someone who is considered by some people to be worth giving a fair whack of money to dispose of on kit as I like, here is my take on high-end laptops. I like to have exactly the same work environment wherever I am. In the office I have some lovely 4K HDR screens. I can plug my laptop into these through a nice dock (and also connect to a big bit of spinning rust for backups) and have a great desktop experience (also via a wireless keyboard and mouse). If I unplug my laptop for the road then the work environment is the same, but now seen through a pretty (very) good laptop screen. Software, data, operating system - everything is exactly the same. I don't have to worry about internet access or any of that hideous roaming-profiles crap to ensure this. I now do this with one of those overpriced MacBook Pros, which I prefer to the ThinkPad X1s I used to have. Whichever you choose, a laptop powerful enough to create a fab desktop experience and well enough equipped to work well on its own is a great thing. It makes work reliable and consistent. I don't care whether I can upgrade it or not. When I want a new one I will get a new one.
I replaced my old X1 with a MacBook Pro (mainly for reasons to do with having to have a managed desktop and no admin rights on Windows, whereas I'm free to do what I want with macOS or Linux - I already have Linux machines for rendering etc.) and I'm pretty happy. The keyboard on an X1 is undeniably better, but the distance between the keyboard and the front edge of the case was painful for me. I did stick with Lenovo a bit, though - I got the first version of the Thunderbolt 3 dock, which is an excellent thing: from laptop to dual-monitor desktop with multiple external disks, by plugging in a single cable.
My great-great-grandfather was an engineer. He emigrated to Australia along with the steam engine he was employed (for the rest of his working life) to keep running. Being an engineer didn't use to mean being able to design a steam engine from scratch; it meant being able to keep it going. I'm heavily indebted to the BG engineers who can keep my central heating going. Something that, as a scientist, is of course completely beyond me.
We need a reference for the flatworm claim. I've taught the flatworm story for years. The 'memory' is just stress hormones (generated when the 'donor' flatworm has to learn a stressful task) which affect the rate at which regenerated (or cannibal) flatworms learn a task. Please take a look at Frank, Stein & Rosen (1970) 169, 339-402. It is a fabulous debunking of 'chemical memory'.
"village of Garston" - unbelievable! I grew up in Watford. Garston isn't a village, it's the bit of north Watford that is near(ish) junction 6 of the M1. It is horrible. I'm not surprised that a nerd from Garston is also a psychopath who enjoys killing small animals. Probably more fun than any entertainment Garston offers. (ps I haven't lived in Watford for decades but I know it very well)
The candela is a measure of human-perceived brightness. As people's perception of the brightness of a light source depends on the wavelength of the light (we are most sensitive to wavelengths corresponding to green, around 555nm, and less sensitive at shorter and longer wavelengths), the definition of the candela has to include a multiplier representing the relative luminous efficiency as a function of wavelength - this function is known as V-lambda. The definition of V-lambda is based on people subjectively matching the brightness of lights of different wavelengths. The standard V-lambda defined by the CIE (Commission Internationale de l'Éclairage) is, in fact, based on measurements from a very small number of observers. There are different versions of V-lambda for daylight-adapted (photopic) vision, dark-adapted (scotopic) vision, and the intermediate state - mesopic vision. The candela is defined in terms of the physical power of a light over an illuminated area at a single standard wavelength (and so is entirely physical at that wavelength), but to use the candela as a measurement of luminous intensity at any other wavelength one has to apply a subjectively defined multiplier from the appropriate V-lambda. All pretty weirdly subjective for an SI base unit!
(I study human vision for a living.)
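To make the bookkeeping concrete, here's the conversion sketched in Python using a handful of approximate photopic V-lambda values from the CIE 1931 tables. The 683 lm/W constant is the exact number fixed by the SI definition (at 555nm); everything at other wavelengths rides on the subjectively derived V-lambda weight.

```python
# Approximate CIE 1931 photopic V-lambda values (dimensionless weights).
V_LAMBDA = {510: 0.503, 555: 1.000, 650: 0.107}

K_CD = 683.0  # lm/W: the fixed constant in the SI definition (at 555 nm)

def luminous_flux_lm(radiant_power_w, wavelength_nm):
    """Lumens = 683 * V(lambda) * watts of monochromatic light."""
    return K_CD * V_LAMBDA[wavelength_nm] * radiant_power_w

# One watt of green light vs one watt of red light:
print(luminous_flux_lm(1.0, 555))  # 683.0 lm
print(luminous_flux_lm(1.0, 650))  # ~73 lm: same physical power, much dimmer
```

Same physical watt, a tenfold difference in lumens - entirely down to the V-lambda table.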
I remember thinking the original 680x0 to PowerPC transition was going to be a disaster but it was actually much smoother than I'd expected. I was programming research stuff at the time as well as using a Mac for day to day office things. I switched to Windows a few years later (my university stopped supporting Macs and hence they became a bit tricky to buy). I'm currently thinking of switching back twenty something years on....
I remember reading an article about ring laser gyroscopes in Scientific American in the 1970s - noteworthy because some of the photos were censored (little black bars over parts of the image) for security reasons. RLGs, fibre-optic gyros etc. must have come on a bit since then. Do they use optical or mechanical gyros on Hubble???
This stuff is far more interesting than 'AI' - in inverted commas because, as the comments above say, 'AI' isn't simulating intelligence. Hannah Fry was right - current AI is stats. In the 80s some statisticians pointed out that back-error propagation in multilayer perceptrons (the de rigueur AI of the time) was just an implementation of a statistical procedure called projection pursuit (super-dooper regression).
Aiming for more realistic goals than intelligence, such as learning to behave efficiently in a complex environment (as the hand people were doing), is to me intellectually much more interesting. Reinforcement learning with temporal discounting (pioneered by Rich Sutton and Andy Barto in the 80s, to much less fanfare than the back-prop business) certainly looks like the way to achieve success (and the way that we and animals likely do it), especially when combined with a system that learns to optimise the representation of the environment from which variables are entered into the learning system (I think I wrote something saying this in the 80s, after which I gave up biologically relevant neurocomputation stuff and did real human neuropsychology - working with patients instead).
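For anyone curious, temporal discounting is tiny when written down: the value of a reward sequence is its gamma-weighted sum, and the classic TD(0) rule from Sutton and Barto nudges a state's value estimate toward reward plus discounted next-state value. A minimal Python sketch (the gamma and alpha values are arbitrary choices for illustration):

```python
def discounted_return(rewards, gamma=0.9):
    # value of a reward sequence: sum over t of gamma^t * r_t
    return sum(r * gamma**t for t, r in enumerate(rewards))

def td0_update(v, reward, v_next, gamma=0.9, alpha=0.1):
    # TD(0): move v a step toward the bootstrapped target
    # (reward + gamma * value of the next state)
    return v + alpha * (reward + gamma * v_next - v)

print(discounted_return([1, 1, 1]))  # 1 + 0.9 + 0.81, about 2.71
print(td0_update(0.0, 1.0, 0.5))     # about 0.145
```

The discount factor is what makes "behave efficiently now" tractable: distant rewards count, but geometrically less.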
I've got one of those adaptive function bars on my old(ish) Thinkpad X1 Carbon. It is ergonomically dreadful (it works as intended but I can't understand why anyone would think working like that was a good idea). Lenovo saw the light and got rid of it as soon as they could with the next generation of X1.
Said it before and I'll say it again. I have had two iPhones, a 3GS and a 6. The 6 is still my phone. The 3GS never broke but eventually it wouldn't run some stuff I wanted. Both had batteries replaced but that is all. For a small complicated thing that is used constantly to last 5 years seems pretty good quality to me. BTW, not a fanboi - I absolutely hated the Macs I had to program in the 80s and 90s, but I would give them credit for the phones being good (unless you feel the need to buy a new phone every year, in which case you are a sucker, whatever kind of phone it is).
I got some Carolina Reapers from Tesco's as well. Not nearly as evil as I'd imagined. I ate half of one raw (sliced into little bits, not eaten in one go) to see what it was like and it wasn't painful, just very, very hot. Nice fruity taste when chopped and cooked in homemade refried black beans. I may have developed a tolerance to chillis - been eating them very regularly for decades.
I had a 3GS which I kept until it stopped working reliably. I then got a 6 which is still going fine (after a new battery, I think the 3GS had 3 batteries in the end) and which I'll keep until it starts causing problems. I think these work out pretty well in terms of pounds per year for phones that I find easy to use and do everything I ask of them. I do know people who change phones every year. That, to my mind, is what is stupid, whether they are constantly upgrading iPhones, Samsungs, or even cheapo android.
I used to have some HD25s which were fantastic until they got nicked. I now have some HD26s - the sound is as good and they are more comfy. These are both stunning headphones given their indestructibility and are great for aeroplanes - fair amount of sound isolation without any noise-cancelling gubbins (also designed for monitoring and so pretty neutral in terms of colouring sound). I have some cheap second hand Audeze headphones (OK, only comparatively cheap - but no more than new Bose QCs) for non-travel use. I remember being impressed by Bose Triports until I heard them back to back with headphones half the price. That, unfortunately, set me on the track of realising that good headphones were a thing (and, in the end, need headphone amps - Oppo do good cheap portable headphone amps).
May as well reply (possibly to some of the replies). I have an amazingly tiny Sony RX1Rii compact camera with a full frame sensor and a lovely Zeiss lens. It takes better photos than any phone and most DSLRs (reminder, proper German Zeiss lens, 42 MPix full 35mm sensor) and can be taken everywhere when you remember. I also have an ancient Canon 1DS Mk2 (full frame sensor again) which takes better pictures than any phone and to which you can attach loads of very expensive lenses (with red rings for preference) and which can be taken most places if you can lug it around (really, really does still take fab pics though). Even with these choices I still end up taking some pictures with my phone - reasonable image quality on a phone is still a good thing to have even if you fell (like me) for all this proper camera stuff.
Absolutely. Every year I look to see whether it looks worth updating my phone. It took from the 3GS to the 6 for the first upgrade. I think the 6 will keep going for a few years yet. The only thing that I think the X could do that I'd like is an augmented reality version of 'on foot' turn by turn directions. Not really worth it for the enormous amount of money. Of course if they got rid of the appalling Lightning connector and went back to something that actually stayed connected, I'd buy such an iPhone like a shot!
Oh I agree. There is no doubt that Nim could communicate, he just wasn't using what Chomsky (after whom he was slightly scurrilously named) would call language. For Nim, 'Nim eat orange' and 'orange eat Nim' meant the same. He was using signs, but not constructing them into sentences where grammar (word order) allows multiple meanings to be communicated with a limited number of signs.
The 'language' the bots produce is, at least on the surface, pretty similar to the 'language' that chimpanzees and bonobos produced in experiments to teach them language (using sign language or symbol boards) in the 1970s and 80s. Herb Terrace reports that the longest sentence produced by the chimp Nim was "Give orange me give eat orange me eat orange give me eat orange give me you." This is probably less sophisticated than the AIs - at least the number of 'me's in their sentences communicates something. All Nim (and Washoe and all the rest) seemed to do was produce the words with no meaning attached to word order or repetition.
I don't understand. A Titan-X has nearly 4,000 CUDA cores. A DGX-1 V100 has about 40,000 CUDA cores. A Titan-X costs about £1,000; a DGX-1 costs about £100,000. Are these things so limited by transfer rates between cards that the 10-fold increase in price per core is worth it? I thought in a neural net architecture you could process data on sets of layers independently and only needed to transfer data across the connections at the top and bottom layers of each set. I am genuinely puzzled. Can someone tell me whether these nets work really differently from the multilayer back-prop I know of old, and why the DGX-1 costs so much compared to the Titan-X?
Yours, an academic who did NN stuff in the 1980s and 90s using such parallel compute monsters as the 16 CPU Encore Multimax!
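The back-of-envelope sum behind my puzzlement, using the rough list prices and core counts quoted above:

```python
# Rough price-per-CUDA-core comparison from the figures in the post.
titan_x = {"cores": 4_000, "price_gbp": 1_000}
dgx_1 = {"cores": 40_000, "price_gbp": 100_000}

def price_per_core(card):
    return card["price_gbp"] / card["cores"]

print(price_per_core(titan_x))  # 0.25 GBP per core
print(price_per_core(dgx_1))    # 2.5 GBP per core
print(price_per_core(dgx_1) / price_per_core(titan_x))  # 10x premium
```

Ten times the price per core - presumably you're paying for the NVLink interconnect, memory, and integration rather than raw cores, but that's exactly the question.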
Biting the hand that feeds IT © 1998–2019