
Science in the Spotlight: Tara Spires-Jones on Publishing, Public Engagement, and Policy

When it comes to advancing our understanding of the brain, few scientists have a résumé as varied and impactful as Professor Tara Spires-Jones. As the UK Dementia Research Institute’s Associate Director at the University of Edinburgh and Editor-in-Chief of the open access journal Brain Communications, Tara’s work spans everything from the molecular mysteries of neurodegeneration to the frontlines of scientific publishing. Beyond the lab, she’s an advocate for rigorous, transparent science and a vocal supporter of public engagement, regularly stepping out of her comfort zone to share insights on everything from dementia research to the ethics of peer review. In this piece, Tara reflects on the challenges of science communication, the evolving landscape of academic publishing, and why she believes collaboration and openness are essential for progress in neuroscience.



What sort of opportunities have you had to communicate your science to the public, and what challenges have you faced along the way?


I love a bit of public communication. I didn't think I would, but the BNA (British Neuroscience Association) has helped with that. One of the things they've been trying to do is make the BNA the trusted voice of neuroscience in the UK - the go-to place if you, as a member of the public, want to know about neuroscience. I got into this because one of my mentors here in Edinburgh, Richard Morris, recommended me to the Science Media Centre. The Science Media Centre curates quotes from experts who are actively researching in the relevant fields on published papers that are being picked up by journalists. Some very cool work will be published - for example, an epidemiological study showing an association between hearing loss and Alzheimer's disease - and they know it is going to be big news, so they ask for opinions. It takes me about half an hour to read the paper and come up with a very simple quote on whether it is a good study, what the sample size is, and whether you can conclude there is a causal relationship. It's usually pretty simple.

I've found that to be really time efficient because it's only half an hour a week, and it's a paper I would have read anyway. It's also very widely accessed, because these quotes go into the BBC and the newspapers. I've had lots of fun being on the radio and occasionally on TV. I don't love cameras as much, but the radio is fine now - I'm comfortable with it.


At first it was very uncomfortable to comment on other people's work, but I'm never trying to say it's terrible, even if it's not great. I point out when there are serious limitations, but I'm never trying to talk down other scientists. The risk is that I'm always worried I'm going to say something stupid or wrong and have it go out to thousands of people. For example, one time I forgot to include a disclosure statement addressing a potential conflict of interest with the study. I've never had a direct conflict of interest, but I work with industry. I had visited one of the big pharma campuses and helped them develop an educational video, which they paid me to do, about how the brain changes in Alzheimer's disease, and I'd forgotten to note that in the disclosures. That was the company whose drug I was commenting on - I hadn't been very nice about the drug, as it had serious problems, but the statement still should have been there. I was contacted by a Guardian journalist saying that they were going to write a big exposé, and I was thinking, “Oh no, I am in so much trouble”. I had to write and apologise to everybody, including our University press people, and say that I'd screwed up because I'd forgotten to add the statement. So, there is a risk when you put your face out there.



Thinking about commenting on other people's work and peer review in general - what's your opinion on anonymous peer review? 


It's tough because, now that I'm Editor-in-Chief of a journal, I've seen it from both sides. On the one hand, the idea of anonymisation and double-blind peer review (where the reviewers don't know the authors’ names, and the authors don’t know the reviewers’ names) seems like a good idea, because there are documented biases against, for example, women and people from certain countries. It is probably unconscious bias most of the time - being harder on people with a female name or a foreign name without realising it - but not always. It sounds like a good idea, but in practice it's almost impossible to stay blind, because you know your field, so if you really cared, you could figure it out.


“I think double blinding hurts your peer review acceptance... you wait forever for us to find a reviewer for your paper.”

The other thing is that it's also one of the reasons people say yes to reviewing a paper - because they're excited to review something by certain people. It's not just about the title or the science. I think double blinding hurts your peer review acceptance which, as an editor, is huge. It hurts authors too, because you wait forever for us to find a reviewer for your paper. I'm on the fence about it.


The other option is to be totally open - the reviewers are known, and the authors are known. In my journal, that's what we offer: if everybody agrees, we'll publish the reviews, with the reviewers' names if they want them. And when I do a review, if I remember, I sign it, because I think people are kinder in how they phrase things when it's known who they are. The downside to that, and why I don't enforce it at the journal, is that people earlier in their career might feel frightened or unhappy about criticising somebody who's very powerful in the field, who's probably on the grant panel, and so on. I don't have hugely strong feelings in either direction. I know there are imperfections in the peer review system because we're human, but I think it's still so important that we peer review each other's work to help find mistakes and make sure that things are solid. It's a tough one.



What is your opinion on the use of impact factor and what is your advice for people selecting the correct journal for their work? 


“With my journal, we don't focus on impact factor at all… we have a huge focus on rigour and reproducibility.”

With my journal, we don't focus on impact factor at all. That was one of the guiding principles when I said yes to it - they said I could do whatever I wanted. I think it's more important that we have a huge focus on rigour and reproducibility, so we have a team of neuroscientists who check the statistics on every paper and make sure that, at the very least, you're doing nothing obviously wrong.


I think impact factor is a difficult one because, for career progression, the system we still live in is one where people will see the journal and, if it's a really high-profile journal, they'll think, “I'm impressed” - at least where I am right now. I think it's probably still universal: your grant applications, your promotions and your job applications rely on it, even though a lot of institutions have signed up to the San Francisco Declaration on Research Assessment (DORA) and agreed that we shouldn't focus only on impact factor. It's still difficult to get rid of the knowledge, in the back of your mind, of how hard it is to get into one of these glamorous journals.


For example, I'm on the MRC Neurosciences and Mental Health grant panel - I got an email yesterday telling me that I had seven grants to assess in 10 days. And I was thinking, “you haven't seen my calendar… do you want me to actually read these seven grants?” I can probably read those seven grants in the next 10 days, but what I can't do is go back and read all the papers the applicants have ever written. If they say that they wrote a paper, and I can see that it's in a respectable journal, I'm much more confident than if I see it in a journal that I've never heard of, or one that has had problems in the past with publishing crappy data.

We need a way, a proxy, of knowing that data are solid, and it's easier to believe something that's in a peer-reviewed journal than on a preprint server, such as bioRxiv, because I don't have time to go and read the article on the preprint server myself. Imagine that I have seven grants to read, each with ten papers that I'd need to read - and I have a day job too. That's not even remotely possible. So, I think that's why people still use impact factor, but I don't believe it should be the only thing people use.


We have to have something like impact factor because there's no time to go back and individually judge every project, every paper and every person on everything they've ever written - it's not practically possible. So, we've either got to develop new metrics or accept impact factor for what it is: one imperfect metric that we know is there.



Yeah, it's so ingrained, isn't it? It's hard to overturn, but I guess the best thing is to use it in conjunction with other metrics.


Yeah, I like to look at how much our papers have been cited. That's not perfect either, but it makes me happy to say, “well, people have read it, and they've respected it enough to use it in their paper”. That's one I prefer. When I write my CV, I add that Google Scholar says we've been cited X times. That's a metric I think is more meaningful than the impact factor of the journal, because at least it's paper-specific. Although, if you only published a month ago, of course it hasn't been cited. So, it's not perfect either.



You also mentioned rigour and reproducibility when looking at papers that are going through peer review or submitted to a journal. How do you ensure that all the people in your lab are doing rigorous and reproducible work?


We talk about it, first of all. Every January, when all the new students start, I give a lab meeting and go back over the lab ethos. One of the big things we talk about is that we want it to be ‘right’ - it doesn't have to be ‘fast’, and it doesn't have to be what you expected, but we do want to be sure that what we do is as accurate as we can make it. We talk about how we design experiments; we have an online lab notebook, and everything has to go in before the experiment, which is easy for me to check. And then, if you change the plan for whatever reason, you just have to update it.


One of the biggest things is that we share our data when we publish, which I think is a real motivating factor. It's good for the scientist because you know there's a possibility that somebody else will look at it. I look at all the statistics and rerun them in my own coding software (R) before we publish - if we're going to share the statistics files, I want to be sure they work, and it also helps me to consider and understand the data better. Every week at our catch-ups, we talk about what statistical models people are using when they analyse their data.
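
[Editors' note: for illustration, here is a minimal sketch of this kind of independent re-check in R. The data file, column names and model below are hypothetical examples, not taken from Tara's lab.]

# Reload the data file that will be shared alongside the paper
# (the file and column names here are hypothetical).
dat <- read.csv("figure2_synapse_counts.csv")

# Refit the reported model from the raw data...
model <- lm(synapse_density ~ genotype + age + sex, data = dat)
summary(model)

# ...and check that the headline group comparison matches the manuscript.
t.test(synapse_density ~ genotype, data = dat)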


The electronic lab book is amazing - I can't recommend it highly enough. We have a server where all the primary data go, and it's really organised - we know exactly where any given image should be, and that really helps us keep things straight, because mistakes can happen after all. But we're trying really hard not to make mistakes, and to make it easy for other people to follow, particularly co-authors on papers - that's what co-authors should do: make sure it's right. We have multiple people go over it, blinding everything when we can.



Does Edinburgh pay for the electronic lab book, or do you have to fund that yourself for your lab?


“We need proper data managers to curate our data…but it's really tough to find funding for technical support like that.”

Edinburgh does have a data store, which we're required to use because we have human data that we can't put on anything that's not GDPR compliant. Edinburgh has the facility and we pay for the storage space, but the UK Dementia Research Institute pays for the electronic lab notebook. We could pay for it out of our own budgets if we didn't have it covered. What's not paid for by anybody, and what I think would massively help rigour and reproducibility across the board, is data managers. We need proper data managers to curate our data, so that the data we share for a paper isn't just dumped into a folder with a note saying, “here's all the images from this figure, and here's all the images from that figure”. We don't have the time to properly curate it ourselves, and that's because we don't have the salary to support a proper data manager. It would be amazing if we could, as a field, place value on data managers funded through the normal funding mechanisms - either by universities or by being able to put them on your grants more easily. But it's really tough to find funding for technical support like that.



That's one thing industry has over academia - they often have people assigned to those roles, which enables them to process data a lot faster and utilise people's skills more efficiently.

How do you feel about Open Access journals? 


There's a bit of a crisis in the publishing field at the minute. Some of it is down to the publishers, because they have huge profit margins, especially for high impact journals. But there's also a push for Open Access, which is great for readers but not great for publishing scientists. You can imagine the inequities for parts of the world that don't have as much research funding, and I don't think there are good solutions for this yet.


At our journal, we are Open Access, but there are processing fees. We waive them for people from low- and middle-income countries and, as an editor, I have discretion to waive fees on a proportion of papers. But as a field, we're struggling to land on the right model: who's going to pay for this? It comes back to this idea that funding is so limited. We would do a lot more science and have a lot fewer problems if we had a lot more money, but we don't. It's an imperfect system.



That affects the timing of publishing as well, because some people wait until they have an even bigger ‘story’, with more data and more figures, to publish in one go, rather than putting out an update that might be useful to others in the field at the time.


Yeah, I think preprint servers have helped with that. We have put things on preprint servers when they were part of a story that somebody wanted to use and needed a DOI to cite in their work, and then we whacked it into a bigger peer-reviewed paper later. Preprint servers act as a ‘waiting for the big story’ option for labs that are willing to use them. But a lot of people don't want to put their data up ahead of publication because they're worried about being scooped. I personally think that if someone has that kind of time, I'd love to have them do the experiment. I'm really happy when people get similar results to ours - I like to see the replication, rather than worry about it.



And what about negative data? People say that negative data are just as important as positive data, but they can be difficult to publish - partly because it's very difficult to confirm a negative. What's your opinion on publishing negative data, and how can people push for that more with journals?


Yes, it is tough to prove a negative, but it is important to try, and we do publish it. We had a paper last year where we had been trying to get tau to spread in fruit flies because we wanted to screen things. So, we put tau into neurons in the fruit fly and tried to make it spread trans-synaptically. We had managed to do this in mice, in human tissue, in human neurons and in all sorts of systems. But in flies, we just couldn't do it. We spent far too much time, far too much money and used far too many fly lines. But the post-doc, James Catterson, and I had to call an end to it and say this just does not happen in flies. We'd tried promoting it, we'd tried overactivating, we'd tried adding amyloid beta - we'd tried so many things. We published it in my journal, because we welcome negative results. And the funny thing was, as soon as we published it, we had all these people on Twitter (X) and by email saying, “oh, yeah, we couldn't make it spread in flies either”. We were thinking, “That was five years of work. You could have said that!”.



Crazy! How useful do you think platforms like Twitter (X) are for scientists? It has been quite a science-centric platform for sharing, chatting and debating - how useful is it as a place for talking about science?


I used to really like Twitter (X). I wasn't much of a social media person, but one of my first post-docs here suggested I join. I liked that you could find papers and have these sorts of conversations, like, “Oh, yeah, we couldn't make it spread either”. But just recently, with Elon Musk, it's become so awful that I've left and gone over to Bluesky, and that's fine. I like being able to scroll through scientists' feeds and see what they're working on. I think that's fun, but it can be quite negative. It's still not a replacement for going to conferences, where I find you get a lot more.


One of my favourite applications of social media is watching a conference update live while I'm not there. I can watch what people are saying and then look them up - that I really enjoy. I appreciate people who live-update from conferences!



Going back to communicating science, I noticed on your Edinburgh staff profile that you've talked about having a passion for communicating science to policymakers. Have you been involved in any policy changes, or even just in communicating with those who make policy?


Yes, I was on the Scottish Science Advisory Council for a few years, and in that capacity I did write policy papers that were sent to MSPs (Members of the Scottish Parliament) - I don't know if they read them. One of them was on open data, and that felt like important and useful work. At the same time, like I said, I didn't see any policy changes or enactments based on it - it was a bit like shouting into the void. I think what we do as charities - the BNA and CaSE (the Campaign for Science and Engineering) - is much more powerful. It's important, and it's their remit as charities to try to influence science policy. For example, there were rumours about the last governmental budget, before the new chancellor, that they were going to cut science funding. The BNA, and lots of other charities, put statements out, posted them on our websites and social media channels, and sent them to relevant contacts, explaining why it would not be good to cut funding. In the end, there wasn't a decrease in science funding in that budget. I'm not saying that was due to the BNA alone - the whole sector came together. I think you can have that kind of impact if you work together.


The other thing we do at the BNA is go and speak in person at Westminster, at the Scottish Parliament and at the Senedd in Wales; we're also trying to organise something in Northern Ireland. I've spoken myself - unfortunately it was during COVID, so it was online. People are quite interested in hearing about the science that's happening in their region and why it's important. I think that kind of thing is really good because it brings the science to the attention of policymakers and informs them that it's something worth supporting when it comes up for a vote. I really think it's important work. It's hard to do on an individual level, but when you're part of a group, I think it's really effective.



Yeah, I think it's important that more scientists get involved in policy change, especially because a lot of them do have strong opinions about how things are being done. But I guess going through a charity or an organisational body is probably the best way to do it.

You've talked about being given patient samples and having collaborations with other labs to make use of their skills and equipment. Sometimes it can be quite difficult to know where to look for collaborations. How do you feel that collaborations have benefited your research?


Yeah, collaborations are amazing, and that's how scientists work best, I think - working together and sharing. But it is tough to manage, especially when you're early in your career and just starting a lab: where do you put your energy? You also don't always want to be a middle author on a paper because, unfortunately, we're rewarded more for first or last authorships. It is a challenge, but I think it's also one of the best things about being a scientist - going to conferences, listening to people's talks, talking to them. If you get really excited about what somebody's doing, it gives you good ideas. That's when I always set up a call and say, “Let's just talk about it. We'll probably think of something”. And then we find ways to do things together - it's usually an enthusiastic, smart early career researcher who wants to bridge the gap. We'll work across labs, and that works well for us.



Definitely! Does your research incorporate any patient involvement?


Not much directly. I work with Alzheimer Scotland, a local charity that helps people living with different types of dementia, and I really enjoy that. We give tours of the lab for people living with dementia and the people caring for them. As fundamental scientists, we find it beneficial for understanding the disease better and for motivation, because it really brings home how impactful the disease is. They're great at talking about ideas and at helping you frame your science in a way that's accessible. It really does help us refine what we do.


We do a little bit with living tissue donors - things like blood - but we don't talk to people routinely as part of our research. We treat it more as an engagement or involvement activity, where we ask people what they think about what we're doing and what's important to them living with this condition - that's useful for us.


This interview was conducted by Rebecca Pope and edited by Rachel Grasmeder Allen, with graphics produced by Suzana Sultan. If you enjoyed this article, be the first to be notified about new posts by signing up to become a WiNUK member (top right of this page)! Interested in writing for WiNUK yourself? Contact us through the blog page and the editors will be in touch.

