Come and do a CASE PhD in multimodal imaging of artwork with us.

We’re very excited to have a 4-year CASE PhD studentship available to develop a multimodal imaging system which is designed for imaging artwork, especially paintings. The formal adverts have been posted online and instructions on how to apply are available there. Here, we’re able to give a more informal and hopefully helpful look at why we’re so excited by this project.

First, we’ve had a lot of investment in imaging for heritage recently, and the student working on this project will have access to new hyperspectral imaging cameras (bought as part of an equipment grant from AHRC this year) and a scanning X-ray fluorescence system (bought just before lockdown and still waiting to be installed). The hyperspectral imaging cameras generate images where each pixel contains the full spectrum from 380 nm to 900 nm (using one camera) and from 900 nm to 2500 nm (using another camera). This lets us identify and separate different pigments and can also tell us about underdrawings and so on. The X-ray fluorescence system gives complementary information: it tells us about the elemental composition, while the hyperspectral cameras tell us about the chemical composition. By analysing both together, we get better information than by using the systems separately.
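If you haven’t met hyperspectral data before, it helps to think of each capture as a 3D “cube”: two spatial axes plus one spectral axis. A minimal numpy sketch (the image size and 5 nm band spacing here are illustrative, not the cameras’ actual specifications):

```python
import numpy as np

# Illustrative hyperspectral cube: y, x, and one band per sampled wavelength
wavelengths = np.arange(380, 905, 5)           # 380-900 nm in 5 nm steps: 105 bands
cube = np.zeros((512, 512, wavelengths.size))  # (rows, columns, bands)

# The full spectrum at one pixel is a 1D slice along the spectral axis...
spectrum = cube[200, 300, :]

# ...and a single-wavelength image is a 2D slice, e.g. the band nearest 550 nm
band_550 = cube[:, :, np.argmin(np.abs(wavelengths - 550))]
```

Pigment identification then amounts to comparing per-pixel spectra like `spectrum` against a library of reference spectra.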

So we need someone to lead on this multimodal imaging project. You’d become our expert on using these two systems, and develop methods for aligning images taken on the two systems (handling the different resolutions and alignments). This requires an understanding of both the instrumentation and the maths of image registration. This is a research question, and potentially publishable, but we have a good starting point and a good idea of how to do it. It would be a great project for your first year to get you comfortable with using the systems and with the image processing.
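To give a flavour of the registration side: one standard building block is phase correlation, which estimates the translation between two images from the peak of their cross-power spectrum. This is only a sketch of one possible ingredient (registering hyperspectral to XRF images would also have to handle scale and rotation differences), written in plain numpy:

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the integer (dy, dx) shift that, applied to `moving`
    with np.roll, aligns it with `ref`."""
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(moving)
    # Normalised cross-power spectrum: its inverse FFT is sharply
    # peaked at the relative translation between the two images
    cross = F_ref * np.conj(F_mov)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

For example, if `moving = np.roll(ref, (3, 5), axis=(0, 1))`, the function returns `(-3, -5)`: rolling `moving` by that amount recovers `ref`.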

You’ll probably need to develop some well-controlled test materials so that we can characterise the performance of the system, but we do have an extensive reference collection and access to other imaging and analysis methods for comparison (e.g. microscopy), together with historic samples from UCL Special Collections (paper, parchment and plastics), the Petrie Museum (ceramics) and other partners. We are also part of EU consortia such as IPERION HS and E-RIHS, bringing opportunities for international collaborations.

This is our imaging frame, supplied by ClydeHSI, which is used for imaging paintings.

Measuring the immediate properties of the material (reflectivity, absorption etc.) is important and can sometimes give us the information we seek. But perhaps the most novel part of this project is to move beyond these straightforward parameters and instead develop ways to generate images of the derived properties that are of direct interest to conservators, historians, archivists and others: chemical composition, acidity, degree of polymerisation and even physical parameters such as stress and strain. Doing this at high spatial resolution will allow us to detect leaching of chemicals, acid damage and mechanical damage non-invasively with unprecedented detail. This will require the development of new methods, which could be based on classical statistical methods such as linear regression or on machine learning approaches.
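As a toy illustration of the regression route, here is the simplest possible version with entirely synthetic data; in practice the training targets would come from reference samples characterised by conventional chemical analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 500 "pixels", each with a 50-band spectrum,
# and a scalar property (think acidity) that is a noisy linear
# function of the spectrum
n_pixels, n_bands = 500, 50
spectra = rng.random((n_pixels, n_bands))
true_w = rng.normal(size=n_bands)
prop = spectra @ true_w + 0.01 * rng.normal(size=n_pixels)

# Fit one weight per band by ordinary least squares
w, *_ = np.linalg.lstsq(spectra, prop, rcond=None)

# The fitted weights turn every pixel's spectrum into a property
# estimate, i.e. an image of the derived property
pred = spectra @ w
```

A machine learning version would replace the least-squares fit with a non-linear model, but the structure is the same: learn a map from per-pixel spectra to the quantity the conservator actually cares about.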

The general idea of processing images to provide secondary parameters which are close to the interest of the end-user is a hot topic in imaging and has applications throughout science, not just in heritage imaging.

As a CASE studentship, this project is closely aligned with the interests of our industrial partner, ClydeHSI, who supplied the hyperspectral imaging cameras and the motorised support frame. We have worked together on previous successful PhD projects and both we and ClydeHSI see this PhD project as an important part of our growing collaboration. You will have the opportunity to visit ClydeHSI and learn about imaging from a commercial point of view.

This is an example of our hyperspectral imaging system, when we had the opportunity to image a painting called La Ghirlandata by Rossetti before it was cleaned

We don’t want to be too prescriptive as to the kind of student we’re looking for in this project. Four years is a long time and is enough to develop new skills. You’re likely to have a background in a quantitative field such as maths, physics, computer science, engineering or chemistry. The project covers many areas of science and we don’t expect anyone to have all the skills we’re looking for at the start of the project. The kinds of areas we’d like a student to come with experience of include computer programming and image analysis (perhaps with machine learning), chemistry (especially spectroscopy), and instrumentation development. But mainly we’re looking for someone who is curious, keen to learn and enthusiastic to get involved with this and the other projects going on.

We can also mount our cameras on this 3D robot arm for imaging 3D artwork such as sculptures

Recording a video

We’re all becoming experts in video editing and are being overwhelmed by advice on how to do it. Here’s some more.

Well, it’s not advice, it’s my own notes put online on the off-chance they’re helpful to anyone. My motivation for this workflow is that (1) I’m old-school and like to do things locally on my computer so I can see what’s going on; (2) I want a bit more flexibility than just talking to the screen, or talking over powerpoint; (3) every so often I get stuck with video, so I thought it was time to put some thought into it.

I had a go with lecturecast and echo360 and that worked OK for simple stuff, but I wanted more flexibility. I saw Mark Handley’s method and found that a bit overwhelming. This seems to be a compromise that works for me. One bonus is that all the software is free. And I run it all on Windows.

I guess this really should be a video, but I find it easier to have a page of notes at the side of me while I’m recording the video to remind me what to do, so this is the blog version of that (see also point 1 above). There are no guarantees that any of this is optimal or that these are the best settings, but they seem to work for me.

1. Preparation

Like DIY, if you get your preparation right, everything else is a lot easier. Decide what you’re going to say, and assemble any powerpoints, images and other material. It’s worth doing a script if the video is short, but if it’s longer than a few minutes, I don’t bother. Remember you can repeat a bit if you make a mess.

In the pre-covid Olden Days, most of my teaching was based on slides, either in powerpoint or pdf. I’m making use of these, either by talking over them or by saving the slides as .png and mixing them into the video. If you’re going to do that, make sure the slide size is the same as the video resolution (Design > Slide Size).

Have a look at your background and make sure there’s nothing there you don’t want to be there. You might need to move your chair or move other things around in your room. Adjust the lighting if you can – my desk faces the window and can be quite bright. I either partly draw the blinds or wait for an overcast day, when I get a better recording.

I bought a gaming headset – it was the cheapest around – which has a microphone. Note: having microphone recording on a headset means that the 3.5 mm audio jack has four terminals at the end; headsets with speakers only have three. My laptop can only cope with a three-terminal audio jack, so I couldn’t record audio and had to get an audio-to-USB converter. Moral: buy a USB headset. I can record while wearing the headset, though that’s a bit annoying, so I generally prop it up in front of me, just below the camera view. My laptop webcam seems fine, but its microphone is noisy, especially when the fan comes on, so an external microphone is a lot better.

2. Record the video

Following Mark Handley’s advice I use OBS Studio for recording. You can select various sources and swap between them fairly easily, as well as recording the main screen with a little thumbnail video of me talking off to one side, which I think is better than plain slides.

Start by installing and opening up OBS Studio. In the bottom “Sources” box, make sure you have your video and audio inputs selected. Select Display Capture if you want to record a screen. Select your audio source and turn off “Desktop Audio” in Audio Mixer. Your chosen feed should have a green bar that moves as you talk. You can also record the screen (with or without your little thumbnail video in the corner), which is useful if you want to demonstrate software.

Double check the background and lighting.

Adjust the red box around the camera image so it’s the right size.

I don’t think I changed any of the OBS defaults, but it might be worth checking “Settings” at bottom right. I selected “.mp4” as an output under Settings > Output > Recording > Recording Format.

Open the slides, script or whatever you intend to say and click “Start Recording” on the right hand side. Leave a good 5 seconds or so at the start and end before you talk and after you finish to leave space for editing. If you’re going to record a long video, it might be sensible to record a few seconds first and make sure everything’s OK (I have found many, many ways to forget to record the sound).

OBS will save the video to its default location which is C:\users\username\Videos. Find your videos, delete any that failed, check them over, looking at recording quality and sync between video and audio. Mine currently saves as mp4 and as .mkv. I delete the .mkv.

3. Mixing the video

I use Blender for video editing. I’m sure there are other, better video editors available but I used Blender a bit for 3D rendering ages ago and thought I’d stick with it. There are a lot of online tutorials on how to do video editing in Blender. The online help files with Blender aren’t very good.

Select “Video Editing” from the splash screen, or click the little plus sign on the top or select File > New > Video Editing. In the top left box, navigate to where OBS saved your videos. Drag a video from the top left box to the Sequencer timeline at the bottom. You should see horizontal bands representing the audio and the video. You can also drag video and other material in directly from Windows Explorer, which is nice.

You’ll only see a few seconds of your video on the Sequencer. Zooming is a bit weird. You can move along the time axis by holding the mouse button down and dragging left to right, and zoom by holding control and doing the same. Adjust it so that you can see the whole of your video in the Sequencer.

It’s worth selecting the video (blue) and audio (green) and pressing control-G to group them to try to make sure they stay synchronised.

I see a vertical blue line at the start and a faint narrow grey vertical band to the right of that. For some reason, Blender seems to default to only working with the time period within that narrow band. To expand the band, select the audio and video, then click on “View” just above the Sequencer strip and select Range > Set Frame Range to Strips and you should see that grey band expand to include your whole timeline. You might need to do this a few times as you edit your video. I don’t know why the “Frame range” should ever be different from the length of the video.

Now you can add other video, photographs and so on by finding them and dragging them into the timeline. There are a lot of options which are worth playing with. For example, I tend to create a slide in ppt and have that as the opening five seconds before fading to the video. Right clicking on the timeline gives options for editing (splitting a video to cut bits out, fading and so on). Keep saving the project.

Click on the sound bar; Adjust > Sound > Display Waveform is quite handy.

Now save your project.

Now save it as a video – render it. Look at “output properties” in the top right. Under “output” select FFmpeg video file format with MPEG-4 container and H.264 codec (in the options at the right hand side of “Encoding”, “h264 in MP4” seems OK). Under Audio, select MP3. Note that much of the online advice is to render each frame to png and then reassemble the video afterwards. That might be best, but life’s too short.

Now render. This will create a full-size, high resolution video and might take a long time. Blender saves it to c:\tmp. Check it, especially for lip sync.

4. Compressing your video

I follow Steve Rowett’s advice and use Handbrake for compressing the video. However, I found his compression settings a bit harsh. These seem to work for me.

I start by choosing the preset Choose Preset > Vimeo YouTube 720p30. Once you’ve saved your own preset options, you can just choose that.

The changes I make to the standard settings are:

  • Dimensions > Change height to 480 and keep aspect ratio
  • Dimensions > Set cropping to custom and zero
  • Video > framerate to 25
  • Video > quality > constant quality to 24 (works for me, but you might want to change this)
  • Audio > Bitrate > 64. Keep as stereo. There’s not much difference.

These compress to about 20 % of the original filesize and end up giving about 2-3 MB per minute. It’s worth saving your settings as a Preset. Note that you can load a sequence of videos into a queue which can save time.

Now you can upload this into your echo360 library and from there into Moodle. It’s worth spending some time admiring the poetry of the automatic transcriptions.

Have fun! I hope this is useful.

Kelmscott school reading lists

Kelmscott School has reading lists for each year, and some of the older books are available as free e-books which you should be able to download and read on pretty much any screen. Here are the ones I’ve been able to find, mainly on the excellent Project Gutenberg, which has over 60,000 free e-books. You can probably borrow the others as e-books for free if you have a Waltham Forest library card.

Project Gutenberg follows US copyright law and generally includes books that are more than 95 years old, so published before 1925. I think the next book on the reading list to become freely available will be All Quiet on the Western Front, which was published in 1929.

If you read and enjoy any of these books (or any others), why not create a blog and review them?

Year 7

Here are the suggested books

These are online:

I can’t find the others online, but here are the titles:

  • The Graveyard Book by Neil Gaiman
  • The Garbage King by Elizabeth Laird
  • Wonder by R J Palacio
  • Watership Down by Richard Adams
  • The Wolves of Willoughby Chase by Joan Aiken
  • The Dark is Rising by Susan Cooper
  • The Cay by Theodore Taylor
  • The Indian in the Cupboard Trilogy By Lynne Reid Banks
  • The Seeing Stone By Kevin Crossley-Holland
  • Boy and Going Solo By Roald Dahl
  • The Weirdstone of Brisingamen By Alan Garner
  • Across the Barricades By Joan Lingard [I think I remember reading this in school]
  • Tug of War By Catherine Forde
  • War Horse By Michael Morpurgo
  • Stone Cold By Robert Swindells
  • Northern Lights By Philip Pullman [also The Subtle Knife, and The Amber Spyglass].
  • Cue for Treason By Geoffrey Trease
  • The Eagle of the Ninth By Rosemary Sutcliff
  • The Secret Diary of Adrian Mole Aged 13 ¾ By Sue Townsend
  • Raptor By Paul Zindel
  • The Sword in the Stone By T.H White
  • The Hunger Games By Suzanne Collins [and the others in the series]
  • Pig-Heart Boy By Malorie Blackman
  • Framed By Frank Cottrell Boyce
  • The Book Thief By Markus Zusak [I reckon this is a bit tough for year 7]
  • The Box of Delights By John Masefield
  • Percy Jackson and the Lightning Thief By Rick Riordan
  • Holes By Louis Sachar
  • Twelve Minutes to Midnight By Christopher Edge

Year 8

Here are the suggested books

These are online:

I can’t find the others online, but here are the titles:

  • Madame Doubtfire By Anne Fine
  • The Outsiders By S E Hinton
  • Chinese Cinderella By Adeline Yen Mah
  • Coram Boy By Jamila Gavin
  • The Curious Incident of The Dog in the Night-Time By Mark Haddon
  • Flambards By K M Peyton
  • The Day of the Triffids by John Wyndham [all his books are great, if scary. Try the Midwich Cuckoos]
  • Goggle-Eyes By Anne Fine
  • The Flame Trees of Thika: Memories of an African Childhood By Elspeth Huxley
  • The Hitchhiker’s Guide to the Galaxy By Douglas Adams
  • Small Steps By Louis Sachar
  • How I Live Now By Meg Rosoff
  • I am David By Anne Holm
  • A Kestrel for a Knave By Barry Hines [this one and the one before are heartbreaking. Don’t read them one after the other]
  • Journey to the River Sea By Eva Ibbotson
  • The Tulip Touch By Anne Fine
  • Looking for JJ By Anne Cassidy
  • The Plague Dogs By Richard Adams
  • Lord of the Flies By William Golding
  • The Woman in Black By Susan Hill
  • Of Mice and Men By John Steinbeck [another sad one. What is it about year 8?]
  • The Giver By Lois Lowry
  • The Dam Busters By Paul Brickhill
  • 1984 By George Orwell [year 10 Virtual Reading Programme book]

Year 9

Here are the suggested books

These are online:

I can’t find the others online, but here are the titles:

  • Are You There God? It’s Me, Margaret by Judy Blume
  • Hatchet by Gary Paulsen
  • Twilight Saga by Stephenie Meyer
  • Noah Can’t Even by Simon James Green
  • The Mosquito Coast by Paul Theroux
  • The Amnesia Clinic by James Scudamore
  • Brave New World by Aldous Huxley [a vision of the future like 1984, but different]
  • Brighton Rock by Graham Greene
  • Shakespeare by Bill Bryson [reminds me – what about books by this guy?]
  • One Day in the Life of Ivan Denisovich by Alexander Solzhenitsyn
  • The Old Man and the Sea by Ernest Hemingway [a really good and very short read]
  • Catch 22 by Joseph Heller [this is bonkers]
  • Collected Poems by Philip Larkin
  • The Catcher in the Rye by J D Salinger
  • Empire of the Sun by J G Ballard
  • The Remains of the Day by Kazuo Ishiguro
  • Scoop by Evelyn Waugh
  • Paddy Clarke Ha Ha Ha by Roddy Doyle

Year 10

Here are the suggested books

These are online:

I can’t find the others online, but here are the titles:

  • A Monster Calls by Patrick Ness
  • Every Day by David Levithan
  • Paper Butterflies by Lisa Heathfield
  • A Separate Peace by John Knowles
  • American Gods by Neil Gaiman [I want to read this one]
  • Never Let Me Go by Kazuo Ishiguro [this is good]
  • Rebecca by Daphne Du Maurier [bit slow, but pretty good]
  • Atonement by Ian McEwan
  • The Grapes of Wrath by John Steinbeck [good]
  • The Wasp Factory by Iain Banks [very good]
  • Do Androids Dream of Electric Sheep? by Philip K. Dick [this was the basis for the film Blade Runner]
  • Long Walk to Freedom by Nelson Mandela
  • The Road by Cormac McCarthy [good, but bleak]
  • All Quiet on the Western Front by Erich Maria Remarque [German view of WW1]
  • Tinker Tailor Soldier Spy by John le Carré [I like his other books but struggle to get on with this one. I find The Spy Who Came in from the Cold an easier read]
  • Schindler’s Ark by Thomas Keneally [what is it with year 10? Are there no happy books?]
  • The Fellowship of the Ring by J.R.R. Tolkien [this goes on a bit. I’d start with The Hobbit and if you find you like reading about little people with hairy feet getting lost, then start on The Lord of the Rings]
  • Oranges Are Not the Only Fruit by Jeanette Winterson

Year 11

Here are the suggested books

These are online:

I can’t find the others online, but here are the titles:

  • A Game of Thrones by George R. R. Martin [violent and gory]
  • Birdsong by Sebastian Faulks
  • Looking For Alaska by John Green
  • The Kite Runner by Khaled Hosseini
  • Maus by Art Spiegelman
  • Fahrenheit 451 by Ray Bradbury [Bit mad. It’s the Year 9 Virtual Reading Programme book]
  • Dune by Frank Herbert [long, but good]
  • Murdertrending by Gretchen McNeil [best title]
  • Carrie by Stephen King [violent and gory]
  • A Short History of Nearly Everything by Bill Bryson [an easy-to-read science book]
  • Cosmos by Carl Sagan [another easy to read science book]

3D modelling


Since the excellent 3D imaging in Cultural Heritage meeting at the British Museum, I’ve fancied having a go at 3D modelling and finding myself with some time on my hands due to the ongoing strike action, I thought I’d have a go. Here are some notes in case anyone else wants a try.

I downloaded the trial version of Agisoft Photoscan (you need to request a code which activates the professional version for 30 days) and followed the instructions in this tutorial which was very helpful.

My first attempt was this headrest from Northern Kenya. I can’t remember exactly where I got it, but it looks like it was produced for tourists in the style of headrests used by Turkana, Samburu and Pokot people.

(These Sketchfab models don’t seem to work for some people on some browsers. If you’re one, I’m sorry – best I can suggest is try again with another browser. They do work, honest!)

The model is pretty good for a first attempt. The carving on the top is clear and the texture is very good. There’s a bit which hasn’t quite rendered properly underneath the top surface and the wire, used for carrying the headrest, hasn’t really worked. For this, I used a Nikon D3300 with a 18-55 mm VR II lens at the 55 mm setting, and took 67 photos, each as 2992 x 2000 pixel jpegs. I could use higher resolution RAW files, as recommended, but I wanted to be confident I could do the rendering on my laptop. The most time-consuming step was drawing the mask round the photos. The “Magic Wand” and “Intelligent Scissors” tools which are provided are good, but struggled to cope with the clutter of my lounge!


One of 67 photos with the mask outlining the headrest.

My second attempt was another similar headrest.

Same camera, same process, but I ended up with 58 photos this time. It’s rendered pretty well, especially the leather strap which I thought might have struggled. The top isn’t decorated so much on this one, but the wood grain comes out really nicely.

Again, making the mask took about an hour and a half, so I tried seeing what would happen with a crude mask. When I masked all the images roughly, as in the photo below, there were some irregularities under the top of the headrest, but it’s not bad.



3D rendering after crude mask applied


Crude mask around the object

The rendering was surprisingly good even when I didn’t mask the images at all.


No mask at all

So if the mask doesn’t make too much difference, how much difference does reducing the number of photos make? If I go back to the fully masked photos, but remove half of them, I get this with 29 photos. The curved top edge is clearly irregular. The rest of it is pretty good though.


29 photos. Some irregularity on top edge.

Now down to 15 photos. At last! It’s failed – there’s a hole under the top surface, presumably because that area wasn’t imaged by the remaining views. The whole thing seems to be surprisingly robust though.


15 cameras. Big holes.

Finally, I tried my favourite headrest which I’d left until last because the finish is a lot better and it has leather tassels which I expected would be difficult. I was right…

This needed careful and really time-consuming masks, and then lots of editing by hand to separate the tassels. My editing ended up removing some points from the surface, so the point cloud had holes in it. If I enabled interpolation when meshing, it rendered the tassels well but left holes in the mesh. The model above is with extrapolation, which fills in the holes but doesn’t handle the tassels too well. I reckon this is about as good as it can be with my current set-up. My next improvement would be a turntable with a black background and offset flash, so I can take many more photos and avoid having to mask them by hand.

So – that was fun. The fairly crude set-up – saving jpegs and using my dining table as a studio – gave reasonable results, and it looks like the algorithms used by PhotoScan are pretty forgiving.


Imaging the first printed edition of Euclid’s Elements

My last piece of work of 2017 was a treat, and worth resurrecting this blog for. Some time ago, Tabitha Tuckett in UCL Special Collections and I had a stimulating chat which sparked a handful of potentially intriguing projects. Today’s was to see whether we could find any interesting features associated with the printed diagrams in the margins of the first printed edition of Euclid’s Elements.

Euclid wrote the Elements in about 300 BC and it effectively defined mathematics for the next two thousand years. He wrote it in Greek (some of it is probably a compilation of earlier works) and it was translated into Latin, but it eventually became lost to Western Europe. Fortunately, it was translated into Arabic and found its way to a remarkable monk called Adelard of Bath, who translated it (possibly three times), along with other scientific works, into Latin in the twelfth century.

The Gutenberg Bible was printed in the 1450s, but the earliest printing techniques were unable to reproduce the diagrams which were needed to accompany the text in the Elements. These are mainly lines, circles and arcs of circles. Text was printed with metal type, dropped capitals were added with carved wooden blocks and then the rubricator came and painted on the red (‘rubric’) highlights (I learnt a lot today!). That was all standard printing practice by the 1480s, but in the dedication to the first printed edition of the Elements, the printer (a chap called Erhard Ratdolt) says that for the first time, he’d worked out how to print diagrams but wasn’t going to say how he did it. His methods are still unknown. It is believed that he used strips of metal which, as a printer, he would have had in his workshop, bent them to shape and embedded them in a supporting matrix such as wax. UCL Special Collections have a copy of the 1482 first edition (and the 1491 second edition and 400 others, many of which have been digitised).

The question that Tabitha posed was whether we could detect and image depressions in the page which resulted from the printing process which might give clues as to the methods used to print the diagram.

We decided to use Optical Coherence Tomography. Peter Munro at UCL Medical Physics and Biomedical Engineering has just bought a new system and we were keen to try it out. OCT has been one of the biggest successes of biomedical optics in the last thirty years, having gone from a lab invention in the early 1990s to a standard test for imaging the retina and other parts of the body. It uses near infrared light, so it’s safe, and splits it into two arms using a partially reflecting mirror (a beamsplitter). One arm is used as a reference and the other illuminates the sample. If light on its way to the sample and back travels the same distance as light going along the reference arm and back, the two beams of light add together and appear bright. However, if light penetrates the sample, reflects and then returns, the distance it has travelled is changed and the light no longer adds together. This allows us to build up an image showing where light has interacted most strongly with the sample. The spatial resolution of OCT is ten micrometres or so, and it can penetrate a couple of millimetres into tissue. It’s also being used in heritage applications.
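Put slightly more formally (this is the standard textbook description of low-coherence interferometry, not anything specific to Peter’s system), the detected intensity when the two arms recombine depends on the path-length mismatch between them:

```latex
% Reference and sample beam intensities I_r, I_s; path-length mismatch \Delta z;
% wavenumber k; source coherence function \gamma
I(\Delta z) = I_r + I_s + 2\sqrt{I_r I_s}\,\lvert\gamma(\Delta z)\rvert \cos(2 k \Delta z)

% The fringe envelope |\gamma| dies away once \Delta z exceeds the coherence
% length, which for centre wavelength \lambda_0 and bandwidth \Delta\lambda is roughly
l_c \sim \frac{\lambda_0^2}{\Delta\lambda}
```

Because only light from within a coherence length of the reference path interferes, a broadband source (large bandwidth, so short coherence length) is what gives OCT its ten-micrometre-scale depth resolution.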

Normally, OCT is used to look for features below the surface, but we know that paper scatters light very strongly so we won’t get much detail of the structure of the paper itself. Here, we’re using it mainly to track any changes in the surface of the paper, which might be due to the printing process.

We first looked at the first page of the Elements (ref UCL SPECIAL COLLECTIONS INCUNABULA QUARTO 5q, which was well looked after by Angela Warren-Thomas, a Senior Conservator at Special Collections). It has diagrams, but also text, a grand dropped capital and glorious ornamentation around it. We examined the diagrams, text and the dropped capital, but couldn’t see any clear indentations.

Optical coherence tomography of the first page of Euclid’s Elements

However, when we looked closely at the diagrams, they looked a bit odd. It took me a while to work out why, as I’m now used to looking at perfectly printed diagrams all the time. But back in school when I had to draw diagrams like this, I’d have drawn the circle and then fitted the straight lines to the circle. These had clearly been done the other way round – the straight lines were perfect, but the circles were all over the place. The part of the circumference around C in the photo below is clearly not aligned with the rest of the circle.


Close up of a diagram

We wondered whether the first page would have been handled more than later pages, such that any depressions in the page might have softened over time. So we moved to another page (page c1r in the Byzantine and extremely unhelpful page numbering system). Here, we had more success.

OCT images of (from left) a line through a diagram, text printed with metal type, and a woodcut ornamental dropped capital.

The top row of images above are OCT images showing a line through a diagram, some text printed with metal type and a woodcut ornamental dropped capital. All these are slices through a 3D volume, with the slice chosen to be close to the surface of the paper. Close examination of the line from the diagram shows a “tramline” effect, where there are intense lines of ink following the edge of the line with less ink in the middle. This is commonly seen in printed ink when the printing process acts to squash the ink away from the middle of the line.

The bottom row of images are OCT images taken roughly along the red lines shown in the row above. The horizontal direction of the image shows distance along the paper, but the vertical direction shows depth into the printed page. The vertical bands are shadows where the ink on the surface has prevented light from penetrating the paper. On the left hand image, showing a line from a diagram, it looks like there is a dip in the surface corresponding to the ink. There is no such dip on the text or the woodcut. We consistently saw this dip in the inked diagrams and not elsewhere. We never saw a dip at a distance from the lines, which could suggest that if the metal strips were held in a frame, the frame was quite some distance from the paper.

We repeated this on the second printed edition of Euclid and curiously didn’t see the dip corresponding to the ink lines on the diagrams. However, it was printed differently, with different paper so it’s hard to compare different books.

We can do more analysis on the OCT data, including quantitative measurements of the depth of the depressions, but our results suggest that Ratdolt did use a different method to print the diagrams and the text. It looks like he at least typeset the straight lines before he typeset the circles and he may have printed them separately. If he did print them with metal strips in a mould, he did it in such a way that the mould did not apply significant pressure to the paper. We also wonder whether the printer of the second edition used a different technique for the diagrams than Ratdolt did for the first.

It’s exciting to think that modern imaging methods might be able to cast light on Ratdolt’s 500 year old printing innovation.

The bite of tyrannosaurus

[Third in a series after Trilobite Optics and Dinosaur Dimensions, based on a guest lecture I give for MPHY2001, Physics of the Human Body]

The world of dinosaur palaeontology has transformed since I read dinosaur books as a kid around 1980. Maybe the biggest change is the recognition that feathers were common, and that some dinosaurs evolved into birds – even Tyrannosaurus may have had feathers. Another change is that the slow-moving, dim-witted dinosaurs in my books were probably active, somewhat warm-blooded creatures. The crests of Triceratops have been re-evaluated, and the purpose of Stegosaurus’ spine plates seems to change frequently. The impression is that research has moved from anatomy into physiology: palaeontologists are now discovering how dinosaurs behaved, rather than just what they looked like. This is remarkable, given the paucity of fossilised remains from 65 million years ago.

There has been quite a bit of recent research looking at carnivore feeding behaviour by using computer models to estimate bite strength. We know by looking at living animals that bite strength can give clues as to feeding habits. For example, the animal with the strongest bite force alive today is probably the crocodile, with a bite force of something like 16 kN (humans can barely manage 1 kN). A crocodile feeds by waiting in ambush, then biting and gripping its prey, so it needs to bite hard to stop the prey from escaping. A great white shark’s bite strength is similar, which is not particularly impressive given its size and the size of its prey, but it uses its sharp teeth to bite chunks out of its victims rather than to grip and hold on. Hyenas have a famously strong bite, as they can crunch bones, but the sabre-toothed tiger is thought to have had a relatively weak bite, presumably because its teeth were somewhat fragile and it killed by making precision puncture wounds.

The force with which an animal can bite is, of course, not necessarily the same as that with which it can open its mouth. Commonly, the muscles used to close the mouth are a lot stronger than those used to open it, as anyone who eats toffee knows. The crocodile is a great example of this: its bite force is huge, but the force it can apply to open its mouth is small. A human can hold a crocodile’s mouth closed, or it can be held closed with a couple of turns of tape. The photo below was taken in St Lucia Wetland Park in South Africa. I was part of a Yorkshire Schools Exploring Society expedition and we had the opportunity to work with researchers there. We went with the aim of finding emaciated crocodiles as part of a study into pollution in the lake, but ended up finding the largest crocodile ever recorded there, at 4.13 m. We tagged and measured him, and let him go. The last thing we did was take the tape off his mouth and run away fast!

Adam

Back to bite strength: I’ve already mixed up measurements (crocodile, hyena) and calculations (shark, sabre-toothed tiger) of bite force. We can’t measure bite strength in extinct species (or particularly fierce living ones). So how do we calculate bite strength, and how can we apply the same methods to predict it in dinosaurs?

We would normally approach a problem like this by simplifying it until we can write down an equation. For example, we could assume that a jaw is a bit like a pair of scissors, and work out how much force there would be at the tip of the blades if we squeeze by a certain amount. We could write down this equation and solve it, but how much would it help us? The jaw isn’t much like a pair of scissors, and for a realistic calculation we would need to know the shape and size of the jaw, what it’s made of (a shark has cartilage whereas a crocodile has bone) and what the surrounding muscles are like. We would never be able to write down an equation for a structure as complex as that, let alone solve it. So we turn to a different approach: the Finite Element Method. The maths for this was worked out in the 1950s, but it only became commonly used once computers were widely available. Effectively, you take a complicated structure like a bridge or a skull, with a complex shape and different materials with different properties, and chop it up into many tiny pieces (“finite elements”) which are connected into a finite element mesh. The computational problem changes from solving one large, complex problem to solving lots and lots of small, easy ones. You can then apply a force to the computational model and see what happens: does the bridge bend; does the car survive the crash; how hard does the crocodile bite?
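To make the idea concrete, here is a minimal sketch (in Python with NumPy, nothing like the professional software used for skull models) of the simplest possible finite element calculation: an elastic bar clamped at one end and pulled at the other, chopped into small two-node elements whose simple stiffness contributions are assembled and solved together. All the numbers are illustrative assumptions.

```python
import numpy as np

E = 200e9      # Young's modulus (Pa), a steel-like value for illustration
A = 1e-4       # cross-sectional area (m^2)
L = 1.0        # bar length (m)
n_elem = 10    # number of finite elements
n_node = n_elem + 1
h = L / n_elem # length of each element

# Each tiny element contributes a simple 2x2 stiffness block; assembling
# them turns one hard problem into many easy ones.
K = np.zeros((n_node, n_node))
k_e = (E * A / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += k_e

# Pull with 1 kN at the free end; clamp node 0 (remove it from the system).
f = np.zeros(n_node)
f[-1] = 1e3
u = np.zeros(n_node)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

print(u[-1])   # tip displacement in metres
```

The reassuring check is that the tip displacement from the assembled system matches the textbook answer u = FL/EA: many easy local problems, solved together, reproduce the behaviour of the whole structure.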

We use this technique in optical tomography to model how light travels through the body. Our software is called Toast. We shine near-infrared light onto the body, where it is absorbed by different tissues, but mainly by blood. If we can work out where the light has been absorbed, then we know where the blood is, and by using light of different colours, we can work out what colour the blood is, which tells us how much oxygen it’s carrying. We’ve used this method to look for brain activity in babies and adults, and to image breast cancer. Below is a finite element mesh which we have used to reconstruct images of brain activity in babies.
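The “different colours” trick can be sketched with the Beer–Lambert law: measure absorption at two wavelengths and solve two simultaneous equations for oxygenated and deoxygenated haemoglobin. The extinction coefficients, path length and concentrations below are rough illustrative assumptions, not values from our instruments.

```python
import numpy as np

# Approximate extinction coefficients at 760 nm and 850 nm for
# [oxyhaemoglobin, deoxyhaemoglobin] (assumed values, cm^-1 per mM).
eps = np.array([[0.59, 1.67],   # 760 nm
                [1.06, 0.78]])  # 850 nm
path = 1.0                      # effective optical path length (cm, assumed)

# Simulate the absorbances we would measure for known concentrations,
# then recover the concentrations by solving the 2x2 linear system.
c_true = np.array([0.06, 0.04])          # mM of [HbO2, Hb]
absorbance = eps @ c_true * path
c_est = np.linalg.solve(eps * path, absorbance)

saturation = c_est[0] / c_est.sum()      # fraction of haemoglobin with oxygen
print(c_est, saturation)
```

Two colours give two equations and two unknowns; real instruments use many wavelengths and a full light-transport model, but the principle is this linear solve.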

The Finite Element Method has been used to examine bite strength in many animals, but it’s interesting to compare two particular creatures: Allosaurus and Tyrannosaurus. These are superficially similar, large meat-eating dinosaurs, but the results from finite element analysis are really rather different. Allosaurus had a bite strength of only 1-2 kN – somewhere between that of a fox and a wolf, and very disappointing for a big, fierce dinosaur. Its skull, however, was remarkably strong, able to withstand more than 50 kN. Why would its skull be stronger than the muscles attached to it? The most likely explanation is that Allosaurus ate by slashing at its prey with its mouth open, ripping off chunks of flesh, rather like the great white shark. Tyrannosaurus, on the other hand, could deliver a bite force of something like 50 kN – much more like what we would expect from everyone’s favourite dinosaur, and suggesting it could attack even the largest and best-defended prey. Such a great bite strength would have enabled it to crunch bone, like the hyena, possibly adding weight to the idea that Tyrannosaurus was a scavenger.

The finite element method is one of the most important tools in a physicist’s or engineer’s armoury. It provides an elegant way of linking current research in medical physics and biomedical engineering to other exciting and fun areas of science, and even enables us to work out how long-extinct animals may have lived.

Dinosaur dimensions

This is part 2 of three posts on the Physics of Prehistoric Animals. The first is on trilobite optics and the last is on finite element modelling of bite strength.

The main thing we all know about dinosaurs is that they were big. The largest sauropods were 40 m long and weighed 100 tonnes. A blue whale has about twice that mass, but it is supported by water – the sauropods were the largest land animals.

We can use scaling laws to try to understand how huge dinosaurs functioned. First, imagine a small animal (say a mouse) getting larger. If its length doubles, then its volume, and therefore its mass, will increase approximately as the cube of its length: it will weigh eight times as much. However, the strength of its muscles and bones depends on their cross-sectional area, which only increases with the square of its length. Our giant mouse will be twice as long and eight times as heavy, but its legs will be only four times as strong, and it will struggle to move. This is why heavy animals (elephant, hippo, rhino) need disproportionately thick legs to support their massive bodies. The concept of scaling laws like this was put forward by Galileo in 1638.
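Galileo’s square-cube argument is easy to tabulate: scale the animal’s length and compare how its mass (a volume, so length cubed) and its strength (a cross-sectional area, so length squared) grow.

```python
# Square-cube law: scale an animal's length by a factor and see how
# mass (volume) and bone/muscle strength (area) change.
def scale(length_factor):
    mass_factor = length_factor ** 3      # volume, and so mass, ~ L^3
    strength_factor = length_factor ** 2  # cross-sectional area ~ L^2
    # Load per unit of strength grows linearly with length:
    stress_factor = mass_factor / strength_factor
    return mass_factor, strength_factor, stress_factor

print(scale(2))   # (8, 4, 2.0): 8x heavier, legs only 4x stronger
print(scale(10))  # (1000, 100, 10.0): legs carry 10x the load per unit area
```

The third number is why the design of an animal cannot simply be photographically enlarged: double the length and each unit of leg must carry twice the load.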

There’s a UCL connection here as well. One of the early popular descriptions of dimensional analysis was by J B S Haldane, a UCL Professor of Genetics and later of Biometry. He did a wide range of research, mainly in theoretical and mathematical biology, which was probably for the best, as much of his practical work involved self-experimentation that apparently resulted in various injuries and illnesses. He was also an enthusiastic populariser of science, through articles written for the Daily Worker. He described dimensional analysis in an essay called “On Being the Right Size” in 1926, before he came to UCL. In it, he explains why “you can drop a mouse down a thousand-yard mine shaft; and, on arriving at the bottom, it gets a slight shock and walks away … A rat is killed, a man is broken, a horse splashes.” His explanation is that the accelerating force due to gravity increases with the animal’s mass (proportional to length cubed), but the resistive force from the air, which slows it down, increases with its surface area (proportional to length squared). The terminal velocity ends up being proportional to the square root of mass/area, or to the square root of length. A horse which is 10,000 times heavier than a mouse would be about twenty times taller, longer and fatter, and so would land with about five times the speed of the mouse – and therefore would splash. In 1956, Haldane left UCL and moved to India, either in protest at the UK Government’s actions during the Suez Crisis or “to avoid wearing socks”, depending on your source. He died of cancer in 1964, having written a poem about it which begins “I wish I had the voice of Homer / to sing of rectal carcinoma” and includes some of the very few medical physics-inspired lines of poetry:

They pumped in BaSO4.
Till I could really stand no more,
And, when sufficient had been pressed in,
They photographed my large intestine.

Back to dimensional analysis: it can also be applied to temperature control. Heat is generated by metabolism throughout the volume of the body (proportional to length cubed), but is only lost through the body’s surface (proportional to length squared). Animals in cold environments should therefore minimise their surface area-to-volume ratio, by either minimising their surface area (Allen’s rule) or maximising their volume (Bergmann’s rule). Arctic foxes, for example, are plump with small ears, whereas desert foxes have huge ears that maximise their surface area for efficient cooling. In this case, the physicist who, when asked to predict milk production, wrote a report beginning “consider a spherical cow…” may have been onto something.

Dinosaurs, being big, had a large mass within which to generate heat, but a relatively small surface area through which to lose it. We tend to think of animals as being either cold-blooded (like the lizard which sits on a rock to warm up in the sun) or warm-blooded (a hummingbird eats its bodyweight in food each day to maintain a body temperature of 42°C). It is likely that the largest dinosaurs were in some intermediate category: they were large enough that their body temperature was substantially warmer than their environment, but they couldn’t actively control it. This suggests that they might have been more active than we might expect, but they wouldn’t need to eat as much as an equivalently-sized warm-blooded animal, enabling them to reach such large sizes. Some large sharks have a similar metabolism, partly warm- and partly cold-blooded. This argument wouldn’t apply to smaller dinosaurs, which would have a relatively larger surface area and therefore lose heat faster. They instead evolved feathers and, by becoming warm-blooded birds, survived the extinction of the dinosaurs.

Some of the most well-known large dinosaurs (Triceratops, Stegosaurus, Spinosaurus) had anatomical features which increased their surface area: a neck frill, plates along the spine, or a sail. We don’t know what purpose these features served, but it’s certainly possible that they played a role in temperature regulation. These large, flat features have a large surface area compared to their volume, so they would gain and lose heat efficiently, in the same way that an elephant’s ears help to keep it cool.


An elephant charging me in Kruger National Park. Note the big, floppy ears, ideal for heat exchange and threat display, and the slightly blurry photograph.

Think now of energy rather than temperature. The energy generated by an animal has to escape through its surface, or else the animal will get hotter and hotter. The energy used at rest, called the basal metabolic rate, must therefore scale with the animal’s surface area, which goes as the square of its length. Because mass goes as length cubed, this is the same as saying that the basal metabolic rate scales with mass to the ⅔ power: it is proportional to M⅔. However, if this exponent is measured, it turns out to be a bit larger, closer to M¾. Either way, this relation is remarkably constant. Apart from steps from single-celled organisms to cold-blooded animals, and from cold-blooded animals to warm-blooded animals, it holds from cells, weighing a tiny fraction of a gram, to whales – over an extraordinary 20 orders of magnitude.
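How such an exponent is measured can be sketched in a few lines: a power law B = aM^b becomes a straight line with slope b on a log-log plot, so fitting a straight line to (log M, log B) recovers the exponent. The data below are synthetic (a Kleiber-like ¾ law with added scatter, an assumption for illustration), spanning the ~20 orders of magnitude mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
mass = np.logspace(-12, 8, 200)              # grams, cells to whales
rate = 0.5 * mass ** 0.75                    # assumed 3/4-power law
rate *= rng.lognormal(0.0, 0.1, mass.size)   # biological scatter

# On log-log axes the power law is a straight line; its slope is the exponent.
slope, intercept = np.polyfit(np.log10(mass), np.log10(rate), 1)
print(slope)  # close to 3/4, not the naive 2/3 surface-area prediction
```

With data spread over so many decades, even noisy measurements pin the exponent down tightly, which is why the ¾ vs ⅔ distinction can be made with confidence.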

This is possibly the most extraordinary graph in biology. Think about it. Animals from bacteria to mammoths could barely be more different, but they all follow the same scaling law.

The argument as to why the exponent is ¾ instead of ⅔ is complex and has only recently been worked out. Part of the justification is that the energy is generated on the surfaces of structures within cells, and the total surface area involved is so large and so tangled up that it actually scales with the volume. Imagine stuffing a bedsheet into a washing machine. Even though the sheet is pretty much a 2D surface, the amount we can get into the washing machine depends on the machine’s volume rather than on the surface area of its drum.

There’s a neat little side-argument here: metabolism is supplied by the volume of blood delivered in a fixed time, so if metabolism scales with M¾, then so must the volume of blood supplied in a fixed time. The volume of blood leaving the heart in a fixed time (proportional to M¾) is equal to the volume of the heart (which is proportional to the volume, or mass, of the body) multiplied by the heart rate, so the heart rate must be proportional to M⁻¼. A similar argument can be made to show that an organism’s lifespan is proportional to M¼. If we multiply the lifespan and the heart rate together, we find that the total number of heartbeats is independent of the animal’s size. Hence, remarkably, a straightforward argument from physical principles shows that all animals have approximately the same number of heartbeats in a lifetime, although it is, of course, more complicated than this. This number is about 2 billion, which puts a human lifespan at about 60 years – pretty close, particularly given that medicine and sanitation now give us a longer lifespan than the scaling law predicts.
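The argument above in numbers: if heart rate scales as M⁻¼ and lifespan as M¼, their product is a pair of powers that cancel, so the total number of heartbeats comes out the same for every mass. The constants are assumptions chosen purely for illustration (so that a 70 kg human comes out near 65 beats per minute and roughly 2 billion beats).

```python
# Heart rate falls, and lifespan rises, with the quarter-power of mass;
# their product is independent of mass. Constants are illustrative only.
def heart_rate(mass_kg):      # beats per minute ~ M^(-1/4)
    return 190.0 * mass_kg ** -0.25

def lifespan(mass_kg):        # years ~ M^(1/4)
    return 20.0 * mass_kg ** 0.25

def total_beats(mass_kg):
    return heart_rate(mass_kg) * 60 * 24 * 365 * lifespan(mass_kg)

for m in (0.03, 70.0, 100_000.0):              # mouse, human, blue whale
    print(m, round(total_beats(m) / 1e9, 2))   # ~2.0 billion for every mass
```

The cancellation M⁻¼ × M¼ = 1 is the whole argument; the specific constants only set where the shared total lands.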

Finally, let us come back to dinosaurs. A recent article proposed that a non-linear curve fits the graph above slightly better than the simple power law we’ve been talking about so far. The proposed curve gets steeper as the animals get larger, and at a mass of about 100 tonnes it reaches a gradient of one, beyond which it becomes much harder to supply energy to the body. This may set a soft limit on the maximum size at which an animal can still function efficiently. It happens to be about the size of the largest dinosaurs, so at last we can predict, using scaling laws, how big the largest dinosaurs could be.