A new apparatus called the Reflectance Transformation Imaging (RTI) System for Ancient Documentary Artefacts developed by the University of Southampton has brought Oxford boffins closer to deciphering the world's oldest unreadable alphabet. Described in this paper (PDF), the RTI machine comprises an off-the-shelf Nikon D3X DSLR …
That's a pretty neat idea, but it makes me wonder why they haven't done laser scanning. Presumably it's either too expensive or not high-enough resolution, but I'm not sure what kind of resolution laser scanners can get now, or what size features they're after (presumably 'pretty small').
I actually used a wildly simplified version of this technique when trying to (crudely) find a broken trace on a PCB; flipping back and forth between a few shots taken at different angles in Photoshop makes it much easier to piece together details than a single image, even if that image -is- high quality. The transitions make things jump out.
I have access to some LEDs here. If anyone wants to send me a D3X I'll be happy to experiment and report on my findings...
This technique reveals details that are not "seen" with a single light source.
See here for an image showing results of the technique - http://www.hpl.hp.com/research/ptm/
There was a programme on the BBC recently showing that they have scanned the Antikythera Mechanism in the same way and revealed previously unnoticed detail.
... well, several years ago archaeologists pioneered the technique of scanning ancient tablets with a laser, and then rendering the resulting virtual model as if it were made of chrome. This minimised distracting surface coloration, and made every little detail stand out by moving the virtual light source.
Though not mentioned explicitly in this article, my assumption is that the camera remains still whilst a sequence of shots is taken, the tablet illuminated in turn by each of the LEDs. Again, this serves to highlight every scratch, much as we might examine a physical object for scratches by rotating it in our hands near a bright light.
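This isn't the researchers' actual pipeline (RTI proper fits per-pixel reflectance functions, as in HP's PTM work linked above), but the simplest related trick, classical Lambertian photometric stereo, shows why a fixed camera plus several known light directions recovers surface relief. A minimal NumPy sketch; the function name and synthetic setup are my own:

```python
import numpy as np

def estimate_normals(images, light_dirs):
    """Per-pixel Lambertian photometric stereo.

    images:     (N, H, W) intensities, one image per light direction
    light_dirs: (N, 3) unit vectors pointing toward each light
    Returns an (H, W, 3) array of unit surface normals.
    """
    N, H, W = images.shape
    L = np.asarray(light_dirs, dtype=float)    # (N, 3) lighting matrix
    I = images.reshape(N, -1)                  # (N, H*W) stacked pixels
    # Solve L @ g = I for all pixels in one least-squares call;
    # g = albedo * normal, so normalising g gives the normal.
    g, *_ = np.linalg.lstsq(L, I, rcond=None)  # (3, H*W)
    norms = np.linalg.norm(g, axis=0)
    n = g / np.maximum(norms, 1e-12)           # avoid divide-by-zero
    return n.T.reshape(H, W, 3)
```

With the normal map in hand you can relight the tablet from any virtual direction, which is exactly the "rotating it near a bright light" effect, done in software.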
@ Full Mental Jacket
Cheers, nice link!
Can someone do the Moon please.
Very interesting article and it even gives links to download the program. Might be possible for at least the central part of the moon. I'm going to try it myself but my practical abilities are minimal.
Re: Can someone do the Moon please.
Well, the camera shouldn't be a problem, but the array of lights surrounding it on all sides might be somewhat of a challenge...
Re: Can someone do the Moon please.
Partial array of lights around the moon should be easy: use a single source of light and move it (or move the moon), taking photos with the light in different positions (or the moon in different orientations). If you like, you could then stitch your photos together to make a time lapse video. Why not google "moon time lapse" to find out if anyone has already tried doing this using an existing light source.
More to the point ...
... can the research be used to teach commentards to proofread before posting their typ(e)os? To say nothing of the ElReg authors, towhit:
"Reg readers who paid attention in Classics classes will Cuneiform was a system of writing used in ancient Sumeria"
Beer, because proofreading large quantities of text requires it ;-)
Re: More to the point ...
If you're gonna correct the article, you might as well correct the title: "yields" should be replaced by "may yield" -- no breakthrough yet, just a setup with better images.
You're still dealing with a rather mysterious, short-lived society; there are about 3,000 of those incomprehensible tablets lying around and no multilingual ones (think Rosetta Stone), so scanning a few at the Louvre is a great step, but just the next step -- not a guaranteed solution.
This is one of the rare occasions of a sci/tech story where the BBC News writeup was clearly superior.
Re: More to the point ...
Ahem. I think you meant "to wit".
Re: I think you meant...
I think you added an extra 'o' and ' ', although to be honest, I've never been able to fat finger that combination myself.
Top image: Learned scholars devise means of reading ancient Ubaid tablets by light from 76 candles placed in different positions.
Bottom image: Bag of flour, packet of nuts, small jar of milk.
I'm pretty certain that your translation is wrong.
From my knowledge of proto-Elamite*, the top image translates as "You are in a maze of twisty little passages, all alike".
The bottom image translates as "You have been eaten by a grue"
*Mine's the one with the "proto-Elamite for beginners" tablet in the pocket.
Re: Bottom image
"May contain nuts"
Re: Bottom image
Undoubtedly this was attached to a packet of peanuts
I for one salute our Achaemenid Persian overlords. I'll get my coat.
I bet it's porn. If not, rule 34 will soon apply anyways.
"Dear lord. It IS obvious."
The bottom one is the EULA.
This is nothing new
Ok, the LEDs rather than flashbulbs are new. HP developed this technique years ago and it was used to study the Antikythera mechanism found off Greece.
I don't think the idea of multiple images is new...
but the ability to mix those images to give movable shaded views (presumably in real time) probably is. Indeed, pick the right two groups of images and you have a stereoscopic pair - a Victorian idea, but let's not cavil. I'd assume that a stereoscopic image of very high resolution which can be moved is as good as - or, given the available detail, better than - having the tablet in your hand.
And it's harder to drop it and break it.
With regard to decoding - one of the things that drops out of OCR work is that it's much more reliable to guess at an unknown character when you are sure of those characters adjacent to it (that is, if you have a character which might be a 'b' or an 'h', knowing it is preceded by 't' and followed by 'e' makes that guess a lot better). I can't say - not understanding cuneiform - but I'd expect even a language written on clay to work in similar ways. Though if it is 'one symbol per word' then context from adjacent words is also required, of course.
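The OCR point above can be made concrete: score each candidate reading by how plausible its character pairs are in a known reference text. A toy Python illustration; the function names and the (1 + count) smoothing are made up for the example, not anything the researchers use:

```python
from collections import Counter
from itertools import product

def best_reading(candidates, corpus):
    """Pick the most plausible word given per-position candidate characters.

    candidates: list of strings, one per position, e.g. ["t", "bh", "e"]
                meaning the middle character is either 'b' or 'h'.
    corpus:     reference text used to count character bigrams.
    """
    bigrams = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))

    def score(word):
        # Multiply (1 + count) per adjacent pair so an unseen
        # bigram penalises the reading without zeroing it out.
        s = 1
        for i in range(len(word) - 1):
            s *= 1 + bigrams[word[i:i + 2]]
        return s

    return max((''.join(w) for w in product(*candidates)), key=score)
```

So the ambiguous 'b'/'h' between a known 't' and 'e' resolves to 'h', because "th" and "he" are common pairs while "tb" and "be" (in this toy corpus) are not. The catch the comment already notes: this needs a decent statistical model of the language, which for proto-Elamite is precisely what's missing.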
Re: I don't think the idea of multiple images is new...
"Indeed, pick the right two groups of images and you have a stereoscopic pair"
Not in this case; the camera doesn't move. Looking at a different image with each eye would be wacky, but it wouldn't be 'stereo' in the sense that your eyes resolve 3D images.
And nearly 30 years ago I did 3D imaging with a single image and structured light. It would have worked with most of those tablets.
Surely it must be the earliest recorded Apple patent!
Notice that the top tablet has a rounded corner. Not so funny now, is it?
The second tablet is mainly based around "order black cloth, invent roll neck..."
Down-voted for mentioning "rounded corners" in an indescribably tedious attempt to be witty that stopped being even remotely amusing after about the thousandth time someone did it... which even then was probably a couple of thousand identical postings before your own sparkling effort.
The little facepalm dude next to your post has rounded corners. Prepare for the lawsuit.
Amazing! Fantastic! Astounding!
Developed by Southampton university
So it was developed by Southampton Uni, who handed it over for Oxford to switch on and use, with all the analytical skill that clearly requires. I think Southampton have missed out a little here, happily handing over their idea and device.
Re: Developed by Southampton university
I expect that, because of their name, Oxford can get the tablets on loan. Although it might be safer for the tablets to send the camera and LEDs to them instead.
Can this technique or something like it be applied in 3D to medical imaging?
Re: CT Scans?
Nope. It's for opaque materials, as it works by exaggerating shadows.
Clay, however, degrades over time
Damn. I might as well give up drafting that patent for my new hard disc technology, then.
I translated that tablet with my eyes
It reads.... Gr8 m8 c u lol
50 shades of clay.
Well played. May your death be relatively swift and painless.
I can't read your handwriting. 0/10
Must try harder.
All that effort...
I bet it turns out to be an elaborate pigeon breeding manual.
Only a matter of time...
..before I can download my new proto-Elamite.TTF font file, then.
I can't help but wonder how much better it would be if the camera were in focus, or failing that, not attached to the neck of a rubber chicken. Whatever the problem, that sample image is >seriously< not sharp.
Repeat after me - tripod, remote release, mirror lockup.
The only other option I can think of is that they shot at something like f/256 to get it all in focus, and lost all their detail to diffraction as a result. In which case, it's time to google "z-stacking".
Can it be used.......
......to decipher doctor's handwriting?