
TRT Podcast#129: How to spot high quality research: A conversation with Nate Hansford
How can a busy teacher spot high quality research? Nate Hansford breaks it all down for us in this informative episode!
Full episode transcript
Hello! Anna Geiger here from The Measured Mom, and today, I'm sharing an interview that I had with Nathaniel Hansford. He's been teaching for eleven years and has written two books, "The Scientific Principles of Teaching" and "The Scientific Principles of Reading Instruction." I love the way he breaks down research and what it means and how to understand it for teachers like you and me. He talks about the different kinds of research, as well as why we don't always see a consensus where we think there might be.
We get into the weeds a little bit, but I think you'll be able to follow us pretty well. I also have some great resources for you in the show notes to help you understand more about how research works and how teachers can find the research that informs their teaching. So I hope you enjoy it, and let's get started!
Anna Geiger: Welcome, Nate!
Nathaniel Hansford: Hi, how are you doing?
Anna Geiger: Really good. Thanks so much for taking some time to talk with me today.
I have your book, "The Scientific Principles of Reading Instruction," and in there you really break down what research is and what teachers should look for. But as someone who, myself, has been trying to dive into the research, it is tricky. It's not something that everyday teachers are excited about because research articles are very dense and complicated.
So I was hoping that today you could talk to us about just some basic understandings that teachers should have, where to find the research, what to look for, and then we'll talk about what a quality study is, and things that you've learned in the research that you've done.
Can you introduce yourself first, tell us how you got into teaching, and when research became an interest of yours?
Nathaniel Hansford: Yeah. I am a teacher of eleven years, and I have my specialist in reading and special education. I am also the co-founder of the blog, Teaching by Science - Pedagogy Non Grata. And I am the author, as you mentioned, of two books, "The Scientific Principles of Teaching" and "The Scientific Principles of Reading Instruction."
I actually became interested in evidence-based instruction, as I would call it, or research into education about five years ago now. At the time, I was really interested in the science of fitness and I was spending a lot of time reading and researching about the science of fitness, just because I was really interested in personal fitness. I wasn't that person who was naturally fit, who was easily an athlete, so I just got really into this wormhole of research on that.
But I ended up at this school that was really marginalized. It was really affected by institutionalized racism. It was in the far north of Canada. It was an indigenous school. The students there, many of their parents or grandparents had been victims of residential schools. There was a very large achievement gap, and I wanted to help my students.
So I went back to get additional qualifications in education. Specifically, I wanted to get my qualifications in reading instruction and special education.
But as I was going through these qualifications, I noticed that my professors often made really strong claims without any research to support them, and I started trying to validate what they were saying or check what they were saying because it didn't sound right to me, it sounded wrong, for lack of a better word.
I had been just really interested in the science of fitness, and I had learned a little bit about how to do research from that, and I quickly started to notice there was this really big gap between what I had been told was the science of teaching and what the scientific research actually seemed to show.
I remember there were two points specifically that really jumped out at me at the start of my journey. One was teaching to learning styles. I remember, I was told in all of my classes, every single course, that the key was teaching to students' learning styles. That never jibed well with me. It sounded too kitschy, for lack of a better word. There are these neat seven little learning styles, and you can take a quiz you get off the internet, learn what learning style you are, and magically, the learning will be better. If you try to really break it down logically, it just doesn't seem to make sense.
For example, let's say I'm a musical learner. Should my math teacher then teach me math entirely through song? It just doesn't seem practical. How much creativity is my teacher going to be required to have to accomplish this? And how long is the lesson planning going to take? It just doesn't make any sense that we're supposed to neatly fit into these boxes. And then of course, everybody says they're a visual learner, and it didn't make sense.
So I started to dive into the research in that and I found, wow, the research seems to show the same thing that I was feeling, that the research was not really strongly supporting it. The way it was laid out by the original theorist and the way it had been applied in the education setting just wasn't logical and wasn't effective according to the scientific research.
The other claim that really set me off on this journey was this idea that we shouldn't start reading instruction too early. I remember, I had this textbook called "Developmentally Appropriate Practice," which is a really popular sub-movement within the balanced literacy approach. It's all about this idea that teachers should take more control over their instruction. But one of the ideas in it was that if we teach students how to read too early, it will cause cognitive damage, that students' brains will rewire in the wrong way and they won't be able to learn how to read properly later on.
I thought, that's a really specific strong claim. You're going to damage kids' brains if they learn how to read too young? There was no citation in my textbook for this, and I thought, how can we make a claim like that and not have a citation?
It actually just said, "research shows," which, by the way, you'll see in a lot of papers and books, "Research shows X," and it's my LEAST favorite phrase because anytime someone says "Research shows..." without a proper citation, it's a huge red flag. It really just means someone THINKS research shows it; they haven't actually proved anything if they haven't gone through the research on it.
I started challenging my professors a little on these and pushing back a little, and had a professor threaten to fail me if I didn't publicly agree with them that the key to instruction was teaching to learning styles.
I just wanted to explore this further, and I had a friend who's also really interested in science and a teacher, and I asked him if he wanted to co-start a podcast with me.
That has led us down this wormhole where now I've written over two hundred articles, and I've recorded hundreds of podcast episodes, and written a couple of books, and I have a handful of studies I've submitted for peer review.
Anna Geiger: Oh, cool. Wow.
Nathaniel Hansford: It's been a really weird, crazy, eccentric journey, I will admit.
Anna Geiger: You definitely stand out because most teachers are not interested in diving into all the research. I think it just feels very impossible really because it's so dense and tricky.
So let's maybe back up a little bit. Let's say a teacher reads something on a website and the person says, this is backed by research. I read a lot of people saying that multisensory teaching is backed by research, and I haven't seen any citations. So if I would say, "Could you give me some research? Could you link to some research that shows that's actually effective?" What should a teacher expect to see when they get to that research, when they get to that link?
Nathaniel Hansford: I think the standard that people sometimes think is that we just have a peer-reviewed citation. Unfortunately, peer-reviewed is a really low bar. I think we sometimes put peer review on this strange pedestal. It's supposed to be this gatekeeper thing. If you have a peer-reviewed study, it's truth. If you look, you'll find peer-reviewed studies contradicting each other all the time, and you can find a peer-reviewed study to support almost anything, including that the pyramids were built by aliens.
What you really need, I think, is to look for a high quality study or a lot of studies. This is one of the hardest parts about referring teachers to research: because studies sometimes conflict with each other or show different data, you have to look at the whole picture. It's really not enough to look at one study. I don't think you really understand a topic unless you've read a lot of studies systematically on that topic.
One big mistake that I think people new to research often make is they'll Google, trying to prove a certain point. They'll search "X pedagogy doesn't work studies" or "X pedagogy works studies." And you really can't do that, because you need to look at the research on both sides of the argument to properly understand it. Searching that way is just too unsystematic. You're intentionally opening yourself up to bias in the results of your research.
In my opinion, the easiest thing teachers can do is look for meta-analyses. So meta-analyses are studies of studies. It's where the authors have systematically reviewed all the studies on a topic, and then they've done an analysis afterwards to see what was the result of those studies on average.
Using this type of approach is not foolproof. It's not like you can always rely on every meta-analysis to produce perfect results. But it is a heck of a lot easier for teachers than asking them, "Hey, I want you to go review all 500 studies on a topic, or all 50 studies on a topic, and understand them." Or, "Please go out and pick the best study on a topic."
Some people do advocate for that approach like, well, it's not so much looking at all the research, we just have to find the highest quality study, which sounds really good in principle, but that's really hard to do and it takes a lot of expertise. Even amongst researchers, they're often debating about how to determine quality.
As a general teacher, if you're new to research, I really think that's not a practical approach. I think the easiest solution for teachers if they want to do research is to start at least by looking at the meta-analyses. What do studies show on a topic?
And if you want to dive deeper into that topic and you want to look at, okay, well, how do different types of studies impact those results? Or what do different types of studies show more specifically for different subgroups?
That's a great question because research is contextual. Say you're looking at phonics, you would expect a different result for studies in phonics in kindergarten than you would in grade 12. You can't look at it all as if it's one catch-all approach. You have to look at those contexts. But I definitely think meta-analysis is a great place to start.
Another good thing to do is to look for blogs and researchers who summarize the research in a way that does this for you. I like to highlight Tim Shanahan because I think he's a phenomenal researcher for numerous reasons. He led one of the largest research studies ever done on reading. He's done many studies, and he has a phenomenal blog called Shanahan on Literacy. He breaks down probably almost every topic on reading instruction. Not only does he say what his research has shown, he breaks down those studies according to their quality and the number of studies that have shown that, and the actual results of those studies. So if you don't want to actually read the research yourself, finding people like Tim Shanahan is a great idea.
Actually on my website, I have a list of blogs where I feel the scholars who run those blogs actually do that, where they really break down the research, in a way that can be relied on. Because I think it's not enough to just say, "What does a peer-reviewed study show? Or can you link me to a peer-reviewed study?" You have to be able to say what type of study it was, and what were the results of those studies, and how might that change your interpretation? And to do that in a really valid way, you have to have looked at both sides of the argument. You can't have just tried to find the one study that supported your hypothesis.
Anna Geiger: Okay. So you have gotten us into the weeds already, so I'm going to back up a little bit because I'm thinking of people that maybe aren't even familiar exactly with what research looks like. So let's say someone asks for research. I think a lot of times what people get is a link to a blog post or something like that, and then they think that's supposed to be the research. So even an article from Tim Shanahan is not the research article.
Nathaniel Hansford: No.
Anna Geiger: But you can find the references at the bottom, which is one thing I like about his website too. So when they get to the research page, if someone's actually shared a research article, they should see things like the abstract, and what else would they expect to see?
Nathaniel Hansford: They should see an abstract, an introduction, a methodology section, results, discussion, limitations. Those are at least what I see to be the most common sections. In my opinion, the methodology section and the results section are the most important, because the methodology section tells you how the study was done, and the results section tells you what was the result of the study.
Anna Geiger: And also, if someone's looking for a particular study, just a hint for people, I did not know about this for a long time, but just Googling the name of a study doesn't always get what you want. But if you go to Google Scholar (scholar.google.com) and type in the name of the study, you can click on all versions, and a lot of times, there will be a free PDF. Other times, you just hit a paywall and then you can't actually read it. You can reach out to the author. I've tried that before and I've never heard back, but that's something you could do.
So talk to us about what peer review means. And then maybe tell us some more words like... You talked about meta-analysis already, but things like "statistically significant" and "effect size," words that people should be aware of when they're reading the conclusion, especially.
Nathaniel Hansford: Well, peer review is really a gatekeeper system. It's meant to make sure that studies that are published are legitimate and not just made-up nonsense. You have qualified scholars in the field who review papers submitted to a journal, and most of this is free. Oftentimes, you can submit your study completely free, and the people reviewing your work are doing it completely free. They're doing this sort of as a volunteer service, as a commitment to science, on both sides. Not to say that there aren't people who do this for money, because there are, but for the most part, I think most sides are doing this out of a commitment to science. It's meant to make sure that what you're getting is of high quality, at least in theory.
Now I will say, I have seen really low quality peer-reviewed studies, and I have found studies that I think are fairly high quality that are not peer-reviewed. Studies that are peer-reviewed tend to show higher results because those are the studies people are most proud of. So the studies people are less proud of, sometimes they're like, "Ah, it's not worth peer reviewing." But when they have a particularly high result, they definitely want to peer review that so they can get it out there and prove that it was an important paper.
In regards to what people should be looking for in a paper, I actually think the first thing they should be looking for is the type of study. I would say there are a couple of study designs that are particularly important for people to know. The two most common types of studies that you'll see in education, in my experience at least, are case studies and theoretical papers.
So theoretical papers are papers that break down the research on a topic, or previous research on a topic, but they're not a scientific experiment. This is important to note because the whole idea of science is built on this idea of testing hypotheses. You have an idea. You think it's correct. You want to prove it. You test it. And hopefully, if it's proven wrong, you admit your hypothesis is wrong and you move on. Or maybe you think, there was something wrong with the test, we've got to retest that again, and in a different way, we have to change the test. Or it proves you correct. So that's sort of the fundamental idea from science.
With the theoretical paper, you're more talking about why you think an idea might work. And that can touch on other fields of science, it can touch on previous papers, it can touch on linguistics, it can touch on neuroscience. But it's often outlining a theory for an idea that you should think should work. There are tons of theoretical papers in education.
The next most common is a case study. A case study is the cheapest and easiest type of study to do. So in a case study, you have a class of students, and at the beginning of your year, you measure their learning. At the end of the year, you measure their learning and you see how much they improved. And hopefully, you're basing this on the idea of implementing an idea or a pedagogy.
Case studies in general are problematic because we expect students to learn over the course of a year anyway. So it doesn't matter IF the students learned; it matters how much MORE they learned than we would have expected.
Now there are ways to analyze case studies, and if we have a lot of them, I don't think they're useless. I wouldn't say that they're useless; it's just that you should never make a definitive takeaway from a case study. It helps to build the validity of a hypothesis. It's not PROVING a hypothesis is correct.
Then we get into actual experiments. The next most common paper you'll see is a quasi-experimental study. There are lots of different ways to do this, but typically you have one class that gets a treatment and one class that doesn't, and then you measure how much more did the treatment class learn than the class that did not. Sometimes actually, the treatment class learns less, and then we have negative results.
For an actual experimental study, you'll see the words RCT or a randomized controlled trial. This is often referred to as the gold standard of research. In this type of study, we randomly select students to either be in the control group or the treatment group.
This actually serves two purposes. One, it makes sure we didn't choose all the best students to go into our treatment group to bias the results. And two, it helps to make sure that the students are equivalent at the start. So if the randomization is done properly, we should have two fairly equivalent groups, and we won't have a one group being stronger than the other. We measure, again, which students learned more at the end of the study.
When we measure the results of this type of research, typically we use something called an effect size. An effect size is a standardized measure. It's meant to account for both the difference in learning, or the difference between two interventions, and the variability. It's meant to be a metric we can use across studies. At least to some extent, we can standardize our measurements so that we know if a study's result is negligible, weak, moderate, or strong, because there's an interpretation guideline.
Generally speaking, any study that shows an effect size below 0.20 is a negligible result. That means the learning difference was so small that we can't be sure it was actually a result of the intervention, or whether it was basically random noise, because we expect there to be some level of random difference between the groups.
So we don't expect small deviations to be important. Generally speaking, effect sizes above 0.40 are seen as moderate, and effect sizes above 0.8 are considered high.
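To make the arithmetic behind these thresholds concrete, here's a minimal Python sketch of Cohen's d, the most common effect size formula: the difference between two group means divided by their pooled standard deviation. The scores, thresholds, and interpretation labels below are illustrative only, not taken from any real study.

```python
import statistics

def cohens_d(treatment, control):
    """Cohen's d: difference in group means divided by the pooled
    standard deviation, giving a standardized effect size."""
    n1, n2 = len(treatment), len(control)
    v1 = statistics.variance(treatment)  # Bessel-corrected sample variance
    v2 = statistics.variance(control)
    # Pooled standard deviation across both groups
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

def interpret(d):
    """Rough interpretation bands like the ones described above."""
    if d < 0.20:
        return "negligible"
    if d < 0.40:
        return "small"
    if d < 0.80:
        return "moderate"
    return "high"

# Hypothetical post-test scores for a treatment class and a control class
treatment = [72, 75, 78, 80, 83, 85, 88]
control = [70, 72, 74, 76, 78, 80, 82]
d = cohens_d(treatment, control)
print(round(d, 2), interpret(d))
```

The key point is that the same raw score gap can be a large or a small effect depending on how variable the scores are, which is why the denominator matters as much as the mean difference.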
Now, that said, I think that might be an oversimplification because what we see is that the higher-quality studies tend to show lower results. And in general, education studies don't show the highest results. In reading science, it's pretty rare for a really well done study to show an effect size above 0.4.
Anna Geiger: Really? Okay.
Nathaniel Hansford: For me, I really use that as a benchmark in my head. When a study shows an effect size of 0.4, especially if it's multiple studies showing an average effect size of 0.4, I see that as really strong evidence that something is working.
If it's showing between 0.2 and 0.4, I'm open to saying that that's an interesting result, but it's not interesting if there's something else that we could use in that situation that shows a higher result.
For example, a lot of people are caught up in the reading wars debate between balanced literacy and structured literacy or science of reading approaches. And in general, we tend to see that heavily systematic phonics approaches like structured literacy or science of reading approaches, as people like to call them, show higher effect sizes of around that 0.4 mark. And we tend to see a little bit lower results in balanced literacy studies.
So you could say, "Well, the results of the balanced literacy studies are statistically significant and they're positive, but we have an alternative that shows better results."
For me, I would rather use the intervention that has shown better results in the literature.
This gets me to the last type of study that we commonly see, which I've talked about at the beginning of the episode, which is a meta study. A meta study tries to take the results of these quasi-experimental or RCT studies together, and average them out to show, what does the average study show?
In my experience, this is really important because we see a ton of variability in education research. And that makes sense, I mean, different teachers are going to be of different quality. You're going to have different results with different students in different settings. It's really tough to know. I would never look at, say, one classroom versus one classroom study as being valid on its own because it's just too small of a sample. How do we know that the other teacher just wasn't better?
And, generally speaking, published studies almost always show a positive benefit, because people don't tend to publish studies with negative results. If a study is negative, people often just don't publish it. They should, and that's a problem, but they don't tend to publish those results. So we have to expect almost everything published to show a positive result.
Sometimes you do see an on-average negative result for something in the literature, and that should be your biggest red flag of all time. If you're seeing a bunch of negative results in the studies, that means that even the people who are trying their hardest to prove it works couldn't do it, and some of those negative studies slip through the cracks. So that's a very big red flag for me when we see a bunch of negative effect sizes.
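The averaging Nate describes in a meta study can be sketched as an inverse-variance weighted mean, the standard fixed-effect approach in meta-analysis: bigger, more precise studies count for more in the average than small, noisy ones. The effect sizes and variances below are invented for illustration, not real phonics data.

```python
def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance weighted) pooled effect size.
    Each study is weighted by 1/variance, so larger, more precise
    studies pull the average more than small, noisy ones."""
    weights = [1.0 / v for v in variances]
    weighted_sum = sum(w * e for w, e in zip(weights, effects))
    return weighted_sum / sum(weights)

# Hypothetical effect sizes from five studies of one intervention,
# with their sampling variances (smaller variance = bigger study)
effects = [0.55, 0.30, 0.45, 0.20, 0.60]
variances = [0.04, 0.01, 0.02, 0.09, 0.05]
pooled = pooled_effect(effects, variances)
print(round(pooled, 2))
```

Notice that the pooled result lands closer to the large low-variance study at 0.30 than a plain average of the five numbers would, which is exactly why a meta-analysis is more informative than eyeballing a list of study results.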
Anna Geiger: So we know that teachers need to see more than one study to be convinced that something is effective. And I've talked before about convergence of evidence and how that has to develop. How have you seen that develop among researchers? How does it work for them to finally all kind of agree on something? How long does that take? I know it's kind of a random question, and there's probably no good answer, but-
Nathaniel Hansford: No, it's a great question! I think what you're asking about is the idea of the scientific consensus, which is like, when do most researchers agree on the topic?
When it comes to education, in my experience, the answer is never. There are always people who seem to get entrenched in specific camps and cannot leave their position for whatever reason. Maybe it's because they wrote their PhD thesis on it. Maybe it's because they read fifty books on the topic and they all said the same thing. Or maybe it's just because they so strongly believe in their heart that they're right, even when the evidence doesn't support them. And researchers are subject to the same biases as everybody else. People tend to get tunnel vision, and they'll look for the evidence to support their thesis, regardless of what the research shows.
I think you do start to see more researchers agreeing on a perspective, once you have one of two things, I find. Either we start to see a couple of really, really high quality studies show very positive results, or show one thing works better than another thing, or we have a well conducted meta-analysis.
For example, there's some criticism of structured literacy, and I'm going to put in quotation marks, "science of reading approaches" or systematic phonics approaches, whatever you want to call it. There's some criticism of those. But most researchers will agree now that phonics is good. That's because we have, by at least my last count, and there's always more coming out, like fifteen different meta-analyses on phonics.
Anna Geiger: Oh, wow.
Nathaniel Hansford: Typically, they show an effect size around 0.40. I think, personally, that we start to see that convergence of evidence once we have a high quality meta-analysis or some really, really high quality RCTs.
I will say that there's debate on even how to interpret science in education. I definitely fall into a camp that is more in favor of meta-analysis, but there's a whole camp of researchers out there who believe that if you have one really, really good RCT, that's all that matters. Then there's other camps of people who don't even believe in experimental research at all and think that all the statistical studies are just mumbo jumbo nonsense or math voodoo, for lack of a better term. You're never going to get a hundred percent consensus.
Anna Geiger: Boy, and that is really discouraging, I think, for teachers who are trying to figure this out because they just want an answer. I think about just the everyday teacher who's doing his or her best and really wants to make sure that they're doing right by the research, but they start to get into the weeds a little bit. Like they joined a big Facebook group and then they find out, oh, I thought we're all on the same side and now we can't even agree on how to teach phonemic awareness or whatever.
How would you encourage people who feel discouraged by the lack of consensus, even among people who agree that we should be teaching according to research?
Nathaniel Hansford: I actually think the lack of consensus, to some extent, is a good thing, because I'm not really a fan of taking a dogmatic approach. I'm not really a fan of saying, okay, this is the answer.
For example, there are a lot of different types of phonics approaches and a lot of different types of approaches to decoding instruction. If you go into any social media group for the science of reading, you're going to find people pushing a print to speech approach or a speech to print approach. You're going to see people pushing an Orton-Gillingham approach. You're going to see people pushing mnemonic-device approaches. You're going to see some people pushing Dr. Peter Bowers' approach of Structured Word Inquiry. And then you're going to see some people who want to take a little bit from everything.
Personally, I don't think there's any strong research that any of these approaches is superior to the others. I think there's advantages to each, but we have pretty strong research that both a print to speech and a speech to print approach work. There's no need to tell people they're doing something wrong, when we don't have research on something.
This is actually how we get more evidence. When people take different approaches, more research gets done. We might find out, fifty years from now, hey, we have more evidence about what works best.
Part of science is actually not forming the strongest opinion, and instead speaking in terms of degrees of relative probability. Rather than saying, "This is fact," you should be saying, "Well, according to my understanding of the best evidence available to me, this seems to be the most likely correct answer."
And being open to criticism and looking at other perspectives I think is a very healthy way of doing that. And I don't think there's a problem with people, looking at, say, an Orton-Gillingham approach and looking at a speech to print approach and trying to find, hey, what parts of that do I think would work best in my classroom, or could I most easily apply?
Anna Geiger: I think that's a really good explanation. And I think, what we need to do is just know, like Steve Dykstra talks about the big rocks, the bullseye science, what we know for sure, pretty much, I mean, we can't know anything a hundred percent in science, but we pretty much know it. And then we have some flexibility in all the particulars, and phonics would be a really good example of that.
Nathaniel Hansford: Actually, that was the main thesis I wanted to get across with my book on the science of reading: that there are basically six things that I thought were especially well-proven about reading instruction that every classroom should have.
The first one is phonemic awareness instruction. The second one is phonics instruction. Third one is morphology instruction. The fourth one is vocabulary instruction. The fifth one would be comprehension instruction. And the sixth one would be fluency instruction.
We have really strong scientific evidence supporting the use of all of those types of approaches in a classroom. We don't need to have students necessarily getting an equal balance of those in each grade, but across all eight grades of elementary school, students should have a significant exposure to all of those types of instruction.
Anna Geiger: Thank you for sharing that, and I would like to recommend your book because you do walk through the specifics of research at the beginning in a very understandable way, and then each of the sections walks through the meta-analysis. For me, it takes a few times reading through because I'm not as familiar with all this language, but it's definitely very accessible to the average teacher.
Nathaniel Hansford: Well, thank you.
Anna Geiger: So for a teacher who says I want to be a person that stays on top of the research, I think the first thing I would probably suggest to them is locating people they can trust to translate the research for them as a starting point.
I just recently, after over ten years of online business, started using Twitter finally, because I was told that's where you actually get research. That's where people actually share that sort of thing. So I'm always looking for people to follow that are going to share new things. You could certainly find my account on Twitter, @measuredmom, and see who I follow to get some people to follow, and that's a starting point. But what would you suggest? Do you have specific names you'd like to share? You mentioned Timothy Shanahan.
Nathaniel Hansford: I tend to focus on the people who dive through, not just what does the research show, but HOW do we know the research shows that. I think it's really important to have that how piece, because if you don't, you don't really know if the person's telling you the truth. There are people I know that have great blogs that are excellent at saying what does the science show, but they don't go through that how piece. So I try to focus on that. Shanahan would be at the top of my list. I'd love to put my own blog on there, Teaching By Science - Pedagogy Non Grata.
Anna Geiger: Certainly. Yep.
Nathaniel Hansford: I think Parker Phonics is a great blog for going through reading research. Reading Rockets is a phenomenal website that has a plethora of very, very qualified reading researchers contributing. The Iowa Reading Research Center is a reading research organization, and they do great work. Holly Lane is another person who has her own blog, who's just a phenomenal researcher and has just such a strong grasp of reading research. And-
Anna Geiger: I could just mention there that she's the author of UFLI phonics, right?
Nathaniel Hansford: That's right.
Anna Geiger: The program that a lot of people are really loving.
Nathaniel Hansford: Yeah. And Mark Seidenberg is another great researcher who I think puts out great content. And I don't even always agree with everything all of these people say. I think they do an excellent analysis, and I trust that this is genuinely their interpretation of the research and that they've actually gone through the research.
Anna Geiger: Just as kind of a recap, for a teacher who says, "I want to start learning this," my recommendation would be to get on Twitter, but personally, I only follow people who talk about the science of reading because I don't want to get into any rabbit holes and ruin my day by being on Twitter. So I go in for ten minutes a day to just see what's there. A lot of times, I'll see new workshops or something I can sign up for, I just did that today, that I wouldn't have learned about otherwise.
I also would recommend being part of a big Facebook group. I know that that can be hard for some people, but I really do like the Science of Reading-What I Should Have Learned in College.
Nathaniel Hansford: Me too.
Anna Geiger: It is very big, and you're going to get all kinds of opinions, so I wouldn't base your beliefs on the comments that people make, but I use that group for all the free trainings that are recommended. And sometimes, if you ask a question about a study, someone might be able to share that with you.
Nathaniel Hansford: And I can just add a comment to that. I just know that Donna has a whole editorial board that decides what posts get through.
Anna Geiger: Yeah. They don't just post anything.
Nathaniel Hansford: No. I know I post quite regularly to there, and sometimes, I'll get rejected. They send me a really nice message and are like, "We rejected this article for this reason. We didn't like this part."
Anna Geiger: Oh, you had an article that you wanted to post and they didn't want to show?
Nathaniel Hansford: Yeah. Yeah.
Anna Geiger: Oh, interesting.
Nathaniel Hansford: I post usually a couple of articles a month to that group, and usually they get through, but every once in a while, they'll send me back a rejection editorial.
But that's a good thing, in my opinion. That makes me have more faith in them, that they're thoroughly vetting the content. I will get a really specific list of feedback if they have a concern or something. So to me, that increases my faith in their ability to do that. And I happen to know that some of the researchers on that board who decide what gets through are phenomenal researchers, and they are very credible in the field. Holly Lane is actually one of them.
Anna Geiger: Yeah. Yes. So that's very cool.
Finally, I think that teachers need a way, and this is a question for me because I'm not sure about this myself, how do I know if a study was just published on phonemic awareness? I'd like to know that, but do I just have to keep searching on Google Scholar every day? Are there alerts? How do you know besides just being active in these other places?
Nathaniel Hansford: Yeah. I mean, personally for me, I find that out through Twitter. I follow a lot of researchers, and when a new study comes out that's particularly important, it usually gets shared a lot on Twitter by those people.
It's actually very hard to stay up to date with research, especially each individual study. I'm sure there are thousands of studies that get published each month, so you can't expect yourself to keep up with every study. You can look for some of those big landmark studies, big RCTs, and the really big meta-analyses that come out. I think those are important to read, if you want to really stay up to date.
There is someone who has a great research review that she puts out once a month. It's Neena. I'm blanking on her last name, but maybe we could add it to the show notes.
Anna Geiger: Yeah. That'd be great. I'd like to know about that too. So it's like an email list that she sends things out on? That would be great.
Nathaniel Hansford: And she has a free version and a premium version, and it just summarizes all the new studies that come out.
Anna Geiger: Oh, that's amazing.
Nathaniel Hansford: On education. But yeah, I think going on Twitter actually is a weirdly excellent place to get news on this. Who would've thought?
Anna Geiger: Yeah, very interesting. Very interesting. I know. That's great.
Well, thank you so much for taking time to talk with me. I hope you don't mind if I send you some emails now and then about a study to ask what you think about it.
We will link to all the things that you mentioned in the show notes today, and encourage people to check out your website, subscribe to your blog, and check out your books as well.
Can you tell us about any future projects that you're open to talking about publicly?
Nathaniel Hansford: Yeah. I have a couple of projects on the go right now. I'm really diving deep into the research on what percentage of students can learn how to read, and what we can do to get them there. So I'm looking at studies that examined individual factors, like policy decisions. I think that's fascinating.
I'm working with my partner on building a reading program that we're going to release. It's going to be free.
And I have lots and lots of meta-analyses on the go because I'm always working on meta-analyses, and I try to review every new big meta-analysis that comes out on my blog too.
Anna Geiger: Awesome. Awesome. That's great. Okay, well, thank you so much. It was very nice to meet you.
Nathaniel Hansford: It was nice to meet you too.
Anna Geiger: Wow, we packed a lot into today's episode! We've got a lot waiting for you in the show notes, which you can find at themeasuredmom.com/episode129. And I'm also going to link to a workshop by Dr. Holly Lane, which really breaks down what research is, what research isn't, how to find it, how to use it, and I think it's a really good introduction to some of the deeper stuff that we talked about today.
Thanks so much for listening, and I'll talk to you again next time!
Nate’s books and other resources
- The Scientific Principles of Reading Instruction
- The Scientific Principles of Teaching
- Nate’s blog, Pedagogy Non Grata
- Nate’s podcast, Pedagogy Non Grata
- Nate on Twitter
Nate’s recommended resources
- Shanahan on Literacy blog
- Mark Seidenberg blog
- Parker Phonics
- Reading Rockets
- Iowa Reading Research Center
- Neena Saha’s Reading Research Recap Newsletter
Also recommended
- Holly Lane’s workshop: Science or Snake Oil?