Raspberry Pi meets Cognitive Neuroscience

Can the Raspberry Pi be used to process 4D neuroimaging data?

When I started my master's degree, I didn't entirely know what I was taking on. I chose to study cognitive neuroscience because I knew it was an area that currently receives a lot of funding for PhD research, and also because it seemed like a robust, scientific approach to psychology. After a few short months I have come to discover that research in this area is something I really enjoy, and that is largely down to the opportunities to program and develop computerized research tools.

At the same time, the Raspberry Pi Foundation has announced and launched the new Raspberry Pi 2: a credit-card-sized motherboard which can be run as a standalone personal computer. I'm not going to go into all the ins and outs of the Raspberry Pi (or Pi, for short); you can find out more by visiting their website at www.raspberrypi.org. Instead, I want to talk about what it brings to the study of psychology.

Anybody who has studied psychology as a science will know that a lot of research is made up of undergraduate students sitting in dark rooms performing mundane tasks while their reaction times are measured. This has been terrific for the creators of MATLAB, whose IDE has facilitated many a psychology button-pressing extravaganza. Even so, there is a new kid on the block gradually gathering momentum, and that is Python.

One of the modules we undertook this spring semester was programming in Python, something I hadn't done before. I was keen to do at least some of it on the Pi, so as to justify my impulsive buying of it. My lecturer found it kind of cute (that would be "cute", except that he didn't say it aloud). I persisted nonetheless, until I found that, at least for graphics, the Pi wasn't altogether compatible with the PsychoPy module we were using. Unfortunately it does not support OpenGL graphics, so it actually struggles with button-pressing experiments. But it was a start, it whetted my appetite, and it was fun.

The real fun has commenced as I've begun my dissertation project. We're doing some work on spatial cognition (how space is represented in the brain) using functional magnetic resonance imaging (fMRI), and part of this involves calculating different test statistics on 3D data on a voxel-by-voxel basis (voxel = volumetric pixel). There are lots of Python modules optimized for flattening, analyzing and reassembling these datasets, and we're using them to build novel analyses which haven't been done before. On this frontier of neuroimaging research, the Raspberry Pi stands gallant as my building platform and testing station for producing these scripts.

I've been using the module 'minepy' to calculate the Maximal Information Coefficient (MIC) for each voxel in a set of datasets. The modules install seamlessly on the Pi from the different repositories, and scripts can be written elegantly in the 'spyder' IDE. What excites me most of all is the sheer size of the data we are working with. Each dataset is 64x64x26 voxels, which means 106,496 calculations. On the Pi each one takes about a second, and (when I close the GUI) all in all it takes about 2 hours. Running this on all 48 of our scans would take about 4 days. Fortunately, for processing the whole lot we can pass our script to the cluster computer at York, which (assuming it is coded correctly) should polish it all up in around 16 hours.
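The flatten, analyse, reassemble pattern behind this can be sketched roughly as follows. The array sizes, the random data and the regressor here are made up purely for illustration, and a plain Pearson correlation stands in for minepy's MIC so the sketch runs without minepy installed (with it, the per-voxel step would be along the lines of `m = MINE(); m.compute_score(voxel, regressor); stat = m.mic()`):

```python
import numpy as np

# Tiny stand-in for a 64x64x26 volume over time: (x, y, z, timepoints).
rng = np.random.default_rng(0)
data = rng.standard_normal((4, 4, 3, 50))
regressor = rng.standard_normal(50)  # e.g. a hypothetical task time course

# Flatten the three spatial dimensions into a single axis of voxels.
n_voxels = int(np.prod(data.shape[:3]))
flat = data.reshape(n_voxels, -1)    # shape: (voxels, timepoints)

def pearson(a, b):
    # Simple Pearson correlation, standing in for the MIC calculation.
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# One statistic per voxel...
stats = np.array([pearson(voxel, regressor) for voxel in flat])

# ...reassembled back into a 3D statistical map.
stat_map = stats.reshape(data.shape[:3])
print(stat_map.shape)  # (4, 4, 3)
```

With real data the inner loop is where all the time goes, which is why each of the 106,496 voxels matters.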

It looks like something out of a movie, but it's real!

What I really like about this is how real it all is. I've taken on many little projects here and there over the years, but none of them have ever really meant anything. Sure, I learned a lot, but I wanted to put it into action. Here, when those lines of text fly up my screen for two hours, I look forward to the output for reasons beyond just knowing it worked. It also gives me the opportunity to work on my projects at home in a Linux software environment, which is great since Python runs natively on Linux.

This also ticks another box for the Raspberry Pi Foundation. Universities and hospitals around the world have powerful workstations and supercomputers which they use to process neuroimaging data. That is very sensible, considering the sheer volumes they work with. But can it also be done on a £25 printed circuit board in Billy-whizz's basement? Yes… it can!

Research Methods Blogs

Two years ago, at the start of my degree, a portion of our research methods grade was based on writing a bi-weekly blog (and comments) on research methods in psychology. There were some pretty common traps that students starting out on their academic writing fell into.

Firstly, we had about a thousand blogs on ethics, validity and whether outliers should be removed or not. It made me wonder what there was to write about beyond these topics. Two years on I think I have the answer, and it has been a light-bulb moment for me as I contemplate gearing up for a higher academic workload.

Imagine you are a PhD student, and the first phase of your study is to read a whole pile of journal articles which you will then need to understand and use to plan your own research.

If you take your highlighter to it in depth, you'll be reading for far longer than three years. If you read merely to say you did it, your recollection will be similar to that of attending lectures. If you want an idea of what results were found, the abstract and elements of the conclusion often do the job for you. My question here is: what are you actually reading for?

As we blogged for our science of education module this semester, I eventually worked out that there were good marks to be earned by critiquing the evidence already cited by other students: looking out for things like conflicts of interest or invalid measures. In short, marks came from evaluating the research methods.

I'm seeing the same pattern as I start to write my dissertation. Prior to my academic blogging experiences I was stalling over what to write, anxious about the task ahead of writing a good literature review. But when you see a 2,000-word literature review as no more than ten 200-word critiques (blog comments), the job seems a lot more manageable.

This perspective of examining the research methods gives purpose to the review I am conducting, as it really does set the stage for my own research. It shows the challenges to overcome and the cause-and-effect relations between different variables and study techniques. And in doing so, the work still carries the general results and findings thus far.

Now, I don't claim for one second that my undergraduate work compares like for like with that of a PhD student. However, undergraduate learning is a taster, modelled around high-level academic study, so these ideas seem to take me a step closer to developing professional skills. Academics aren't there to memorise literature. They definitely aren't there to copy it either. You're trying to seek out and make sense of research so that you can add new value later on. Considering, evaluating and comparing the research methods looks like a really balanced way to evaluate research objectively and build on its potential.

#Lecture2013

Die PowerPoint!

I just did a quick literature search on Google Scholar for the word 'twitter'. It seems that this medium is a relatively unexplored area in the realms of psychology. Tap in 'twitter education' and you get plenty about education, but nothing about Twitter. One researcher has worked out that Twitter gives learners of English abroad an opportunity to practise genuine English conversation, but that was about as interesting as it got.

My project supervisor has mentioned once or twice the idea that an entire lecture slide could fit into one tweet. The field of cognitive psychology teaches us that the semantic (meaning-based) level of processing is by far the best way to learn something if you actually want to remember it, and so the process of analysing and condensing that information down into such a small string will surely help somebody.

But we don't learn much from the type of rehearsal that is merely the robotic re-reading or repetition of someone else's notes. In short, the person who should be tweeting is YOU!

Picture a lecture where, parallel to the slides (or better still, instead of the slides), there is a tweet board. Students are invited to bring their iPads, Androids or Windowses and tweet back their semantic interpretation of what is being taught. Picture a lecturer who examines the tweets during the break, and uses them to stimulate a discussion during the second half of the lecture.

Picture a class being given a hashtag on the morning of the exam, where they can tweet their revision to each other, so they are effectively teaching one another the content and comparing their understanding. Picture them asking their questions and answering each other's questions, literally quizzing each other.

The equipment is already in place, and the cost of such a thing is nothing. I'm not a teacher yet, but I would be very excited to see this in motion one day.

Have a read of the link below too. I read this a few years back and thought nothing of it, but it is an example of the idea already having been done during a Latter-day Saint general conference.

http://tech.lds.org/blog/15-twitter-and-lds-general-conference

See also the 'Was It Freud' wiki article, which details some of the evidence for these ideas.

Is Psychological Research Empirical?

Empiricism can be defined as "the doctrine that all knowledge is derived from sense experience" [1]. In other words, it is knowledge that comes through our observations. In a very simple sense, I drop my pen, and I can observe that it falls downwards; this is the empirical evidence that supports the law of gravity. In the Publication Manual of the American Psychological Association, an empirical study is referred to as a report of "original research" (p. 10). Empirical research is important because it can be verified: sensory observations can be measured, and tangible readings can be taken.

In psychology, whether our research is empirical or not is controversial. As we seek to form a scientific discipline, we look to gather empirical data. Often, our data come from introspection, where a subject describes their feelings. Immanuel Kant (1724–1804) argued that psychological research could not be considered empirical because "mental events cannot be quantified" (Fuchs & Milar, 2002). He suggested that these mental events cannot be analysed either in the laboratory or using mathematical analysis. As these thoughts and feelings are verbally conveyed and then interpreted by another person, meaning can change or become lost, resulting in a game of psychological Chinese whispers.

Kant suggested instead that we should use physical observations, things which can be measured. Indeed not all data is gathered by introspection, and in recent years, technological advances have allowed more and more alternative methods for gathering empirical data.

When studying the brain and the nervous system, extensive methods and tools are now available to monitor activity within these areas. The process of single-cell recording shows us how different specialised cells are in place to detect different types of image, which has given us valuable insights into how vision works (Gleitman, Gross & Reisberg, 2011, p. 105). These processes do not use introspection, and deliver more solid results that we can work with.

While these new advances in technology often allow new and more accurate methods of empirical study, I do believe most of our research still involves a form of introspection.

A study of facial emotional expressions revealed that some basic emotional expressions are found across different cultures (Ekman & Friesen, 1975). In this study, participants were shown faces and asked to categorise each one according to the emotion it was displaying, choosing from six different emotions. Based upon this research, a further study was carried out more recently, which used the same method of asking participants to categorise facial emotions; this time, however, eye movement was tracked using modern equipment (Corden, Chilvers & Skuse, 2008). This study found that people's eyes avoided looking at "emotionally arousing" stimuli, such as "fearful and sad expressions".

During these experiments, introspection was used in conjunction with modern technology to assess participants' perceptions, before further studies and conclusions could be made.

So, are psychological studies empirical? The introspective data is not entirely tangible; it is opinion-based. The same stimulus could be described or categorised differently by different people. At the same time, introspection is still a very useful way to gather data. In my view, the question of validity plays a role: does it measure what it claims to measure? I believe that generally it does. I admit that this leaves the results on a weaker foundation, but in most cases they are sufficiently valid to draw valuable conclusions from.

References

[1] Dictionary.com.

Fuchs, A. H., & Milar, K. S. (2002). Psychology as a science.

Gleitman, H., Gross, J., & Reisberg, D. (2011). Psychology (8th ed.).

Ekman, P., & Friesen, W. V. (1975). Unmasking the face: A guide to recognizing emotions from facial clues.

Corden, B., Chilvers, R., & Skuse, D. (2008). Avoidance of emotionally arousing stimuli predicts social–perceptual impairment in Asperger's syndrome.

Why is Reliability Important?

Last week I read an interesting post on validity, and this week I am going to talk about its brother, reliability. So, why is reliability important? When learning about scientific research and writing, it's easy to feel like we're given a whole set of rules and standards to follow and stick to, extra bits to think about and boxes to tick, when what we really want to be doing is researching psychology.

Ask Joe Bloggs on the street what he thinks about reliability, and he'll probably say something like: "Because if your results aren't reliable then they'll be wrong." And he's right. (Do you not agree?) But are these methods helping us? Or are they a psychological form of 'political correctness gone too far'?

Reliability and validity come hand in hand to ensure the results of an experiment are trustworthy, realistic and correctly obtained. Validity is defined as the "extent to which a measure assesses what it is claimed to measure" (Howitt & Cramer, 2008, p. 261), whereas reliability concerns consistency across different times or circumstances. An experiment could produce results which are valid, and therefore correctly measured, and could help us draw a conclusion, yet they might not be reliable.

Reliability tells us that if an experiment one week produces results supporting hypothesis A, and then another experiment, either with a different sample or a similar (if not the same) sample at a different time, contradicts hypothesis A, the results aren't very reliable, and therefore there is insufficient evidence to draw a conclusion.

Reliability in psychology is often measured using statistical methods. 'Internal reliability' refers to how well each item on a scale measures the concept in question. If the scale is reliable, then theoretically any one item will give the same result as any other item, or indeed as all the items together. Methods such as 'split-half reliability' are used, where the items are separated into two halves, each half is scored, and the Pearson correlation between the half-scores is calculated. Other formulas such as the 'Spearman–Brown formula' and 'Guttman reliability' are also used.
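Split-half reliability with the Spearman–Brown step-up can be sketched in a few lines of Python. The participant and item counts here are arbitrary, and the data is simulated (a latent trait plus noise) purely for illustration:

```python
import numpy as np

# Hypothetical data: 20 participants answering a 10-item questionnaire.
rng = np.random.default_rng(1)
trait = rng.normal(size=(20, 1))                 # latent trait per person
items = trait + 0.5 * rng.normal(size=(20, 10))  # items = trait + noise

# Split-half reliability: score each half of the test separately...
half1 = items[:, 0::2].sum(axis=1)  # odd-numbered items
half2 = items[:, 1::2].sum(axis=1)  # even-numbered items

# ...and take the Pearson correlation between the two half-scores.
r_half = np.corrcoef(half1, half2)[0, 1]

# The Spearman–Brown formula steps this up to an estimate for the full
# test, since each half is only half as long as the real scale.
r_full = 2 * r_half / (1 + r_half)
print(round(r_half, 2), round(r_full, 2))
```

The stepped-up value is always a little higher than the raw half-correlation (for positive correlations), reflecting that longer tests are more reliable.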

More practically, tests can be repeated, either as a simple repeat ('test-retest reliability') or in a different form ('alternate-forms reliability'). However, this can in turn adversely affect the results, since participants' circumstances may change, or memories of the first test can affect how they handle the second. Alternate-forms reliability attempts to overcome the latter problem by using a slightly different test, which resolves the issue to some extent.

Internal reliability still works hand in hand with these practical methods: if, after a repeat test, we see that the value calculated for internal reliability differs from that of the original test, we can determine that the results may not be reliable.

While statistics can seem lifeless, dull and uninteresting, we can see here how mathematical formulas compensate where practical work falls short, and vice versa. Obviously, results must be reliable, and here we have a selection of methods that, used in conjunction with our scientific judgement, can help us ensure both the validity and the reliability of our research.