
Dylan Arena, Chief Data Science and AI Officer at McGraw Hill, shares his insights on the intersection of AI and learning. His background in learning sciences and technology design has shaped his approach to AI-powered tools that help, rather than replace, students and teachers.
242 Audio.mp3: Audio automatically transcribed by Sonix
Dylan: 0:00
AI should be used to augment human relationships. When people used to ask me in 2019 how I thought AI could be most helpful in education, I immediately talked about speech recognition and computer vision, because I fried my arms as a coder. I was really aware of how much easier it can be to just say what you want to say instead of having to type it, because to me it was physically painful to type it, right? But for young kids, it's also much easier for them to express themselves verbally than to write.
Dylan: 0:24
Now there's a real richness in how young learners can express themselves with spoken language that they can't yet translate into written language. So there's a lot of good that can come from considerate applications of speech recognition to the classroom.
Craig: 0:37
Even if you think it's a bit overhyped, AI is suddenly everywhere, from self-driving cars to molecular medicine to business efficiency. If it's not in your industry yet, it's coming fast. But AI needs a lot of speed and computing power. So how do you compete without costs spiraling out of control? Time to upgrade to the next generation of the cloud: Oracle Cloud Infrastructure, or OCI. OCI is a blazing-fast and secure platform for your infrastructure, database, application development, plus all your AI and machine learning workloads. OCI costs 50% less for compute and 80% less for networking, so you're saving a pile of money. Thousands of businesses have already upgraded to OCI, including MGM Resorts, Specialized Bikes and Fireworks AI. Right now, Oracle is offering to cut your current cloud bill in half if you move to OCI. This is for new US customers with minimum financial commitment. See if your company qualifies for this special offer at oracle.com/eyeonai. That's eyeonai, all run together, E-Y-E-O-N-A-I. So go to oracle.com/eyeonai to see if your company qualifies for this special offer.
Dylan: 2:10
I am excited about the idea of a freewheeling conversation instead of a Q&A. I'm excited to see where it takes us. By way of background, I studied symbolic systems and philosophy in undergrad at Stanford. Symbolic systems is interdisciplinary: it's computer science, psychology, linguistics and philosophy. And then I got a master's in philosophy. I was a software developer for a number of years at Apple, sorry, at Oracle, but I fried my hands too much coding, so I have to use a weird keyboard and I use a lot of voice-to-text. In fact, that was my first real engagement with machine learning, with the technologies that are the forerunners of today's Gen AI. Then, because I fried my arms and I realized software development was not a career path for me, I went back to grad school at Stanford and I got a master's in statistics and a PhD in learning sciences and technology design. I was focusing on game-based learning and next-generation assessment, and I was doing a lot of experimental research, so I wanted to get the master's in statistics so that I would understand the tools I was using to test my hypotheses about learning.
Dylan: 3:16
The goal of learning sciences broadly is to understand learning environments and learning contexts, and one subfield within learning sciences is about creating productive learning experiences. That work carried me to co-founding Kidaptive, which was a startup that originally was focused on creating really engaging learning experiences for preschoolers on the iPad. The iPad had come out in 2010 and a bunch of parents were like, whoa, this interface is so intuitive that my two-year-old is doing great stuff with it, you know. And so a whole bunch of junk food showed up on the iPad, and the kids were doing all kinds of things and their attention was all over the place. And we thought, what if we could give them a really cool, enriching learning experience that was as engaging as a Pixar film but also helped build skills? Not the expedient ABC-123 skills that some people fixate on, like, oh, I've got to make sure my kid is doing this, but the rich, holistic set of skills that you want in a well-rounded early learning ecosystem. So we built what I thought were pretty cool experiences.
Dylan: 4:31
It was called Leo's Pad. Parents loved it, kids loved it, app stores loved it. But everybody wanted it to be free, because this was like 2011, 2012, and, you know, apps are free. Especially if you've got a two-year-old, and they pick up one piece of junk food and they play with it for a week, and then they delete it and go to some other thing. You don't want to pay for that. You just want to give them more of the free stuff, even though there's no nutritional value there.
Dylan: 5:05
So we didn't want to do any of the things that startups did to monetize back then, like advertising to toddlers, or harvesting their data in some way, or making their goldfish die so they would pay 99 cents to revive the goldfish. You remember that period where kids were spending hundreds of dollars on their parents' credit cards through games. We didn't want to do any of that. So we pivoted, and we started doing support for other big operations that had created learning experiences that generated learning-relevant data but didn't really know how to make meaning from those data, because our technology platform allowed us to pull learning-relevant data from a variety of contexts, make them all commensurable and contribute them to a single holistic, longitudinal learner profile. So we did that for another eight years. So 10 years total: two years trying to do that sort of App Store thing and then eight years doing what was essentially B2B SaaS, business-to-business software as a service, and also, in a sense, really high-level professional services. We had a team of learning scientists and psychometricians and data scientists who would work with people who had created learning experiences to understand the data that were coming out of those learning experiences, so that they could make the learning experiences personalized, or generate rich analytics about the learning experiences, either for the product developers or for the parents or for the teachers or for the learners themselves, or generate great recommendations about what to do next.
Dylan: 6:22
At some point I recognized that, number one, doing that kind of deep, careful work with one partner and another partner and another partner leads to years-long relationships that are great and deeply collaborative, but it's not the kind of hockey-stick growth that a venture-backed Silicon Valley startup is supposed to achieve. So I kept getting pressure to create a plug-and-play thing that you could just pop into any learning environment and, of course, it'll give you the right data. And I said, but that'll never work, because if you don't understand the context in which the data are being generated, you don't know what inferences you can make from the data. So we started looking for an organization that was so big that it had its own ecosystem, so that the technology and the patterns that we had established for taking data from multiple different programs and making them all commensurable could bear fruit.
Dylan: 7:12
And that led us in short order to McGraw Hill. Sean Ryan, in the school business unit, had already been thinking about the idea of a universal student record: a longitudinal profile of a learner that takes in data from a bunch of learning experiences across time and across contexts and distills the insights from all of those experiences to inform the educator and the learner about where they are and where they can go next. So that was a match made in heaven. We came aboard at McGraw Hill. I am now, after a couple of hops along the way within McGraw Hill, the chief data science and AI officer, but I'm still, by disposition, a learning scientist. So I'm interested in data science and AI only insofar as they support learning.
Craig: 8:01
And how do you, just generally backing up, how do you see the promise of AI in education? I know when we spoke you were very negative on the idea of a personal tutor and, you know, students having a relationship with a screen.
Dylan: 8:23
I'm negative on some aspects of it, absolutely. I should say first, I see tremendous potential for positive applications of generative AI, and AI more broadly, in education. The specific reservations I have are twofold. One, there's a set of people who have tried to replace humans with machines in teaching for over 100 years. Sidney Pressey's teaching machine debuted in 1924, I want to say, and all along that time you could see repeated places where people thought, oh, this is the technological innovation that's finally going to let us create something that's just as good as a teacher, so we can just have the kids engage with that instead of with a human.
Dylan: 9:12
I think all of those efforts run the real risk of extracting from the educational process, from the learning process, the most crucial element of it, which is the sociality of it, the human connectedness. When I was in grad school, I did a study in which participants are navigating a maze, and at certain points in the maze they get tips like, hey, turn left, or don't go that way. We experimentally manipulated whether they believed that the tips were coming from a human, the experimenter, or from a machine. In reality, they were always coming from the machine and were always the same for people in every condition, but the people who believed that the tips were coming from a person learned the layout of the maze faster. That tapped into a general effect that's been found in lots of contexts, which is, if we think that something is social, in other words, that we're engaging with other people, we're more dialed in, we attend more, we care more, we put more effort in. Generative AI can leverage this intentional stance, which is what the philosophers call the tendency of the brain to interpret things in the world as if they're motivated social actors, because generative AI can create situations in which it's quacking like a duck and it's walking like a duck and it's got feathers that look duckish. What I worry about there is the counterfeit nature of those duck-like things. When I believe that I'm engaging with a person, I invest in a certain way, socially, psychologically, emotionally, that the system can't fully back up the way a person could. It's totally fine to engage with a system that behaves in person-like ways. As long as I understand it's a tool and I can use it as such and treat it as such, everything is great.
Dylan: 11:08
We have seen already a number of high-profile examples of individuals who have formed parasocial relationships with personified AI and had tragic outcomes. In 2018, I want to say, there was some kid who developed an AI girlfriend relationship in which, somehow or another, she convinced him that it was a great idea for him to take a crossbow and try to break into the palace and assassinate Queen Elizabeth. He did not succeed. More recently, and more tragically, a 14-year-old boy committed suicide, and it was later discovered that his AI pseudo-girlfriend had been encouraging and reassuring about it. Those are just two examples that come to mind of this general problem of forming a relationship with something that you lean on as if it's a person, but it doesn't really have the support there to act as a person would. That said, again, that doesn't mean I'm against the idea of leveraging AI in tutoring. I think it's a fantastic idea if done right and done safely. I just want to make sure it is done right and safely. Does that make sense?
Craig: 12:19
Yeah, yeah. I mean, that's interesting, because I've been wanting to write something about that, about relationships with personified AI, or AI avatars. But, you know, humans invest things with human-like qualities. I mean, your Roomba that's riding around your house, you know, if it gets stuck and starts beeping, there's a certain sense that you feel sorry for it if you've left it for too long.
Craig: 13:07
Yeah, exactly. Or the dogs that Sony had out for a long time, and people formed relationships, particularly elderly people, and when Sony stopped supporting them, there was grief. I mean, people held funerals, it was a big deal. And, you know, I've been looking at Replika, and I don't know which companies were involved in the incidents you spoke about. And there's a filmmaker recently who has a film called Eternal You, and he explored this idea of after-death avatars, so you can speak to your loved ones after death.
Craig: 14:03
And there's a very dramatic example of a Korean mother who lost a child, and she's wearing virtual-reality glasses and haptic gloves, and it's heart-wrenching. And one of the points of the documentary is, well, what happens? I mean, this can't be free. So are you going to charge a woman for access to her dead daughter? There's a clear ethical cloud hanging over that. And, as you said, even if it's a tutor, if you develop an emotional relationship with the tutor, should you really have to pay to maintain that relationship? Anyway, it does raise all those questions.
Craig: 15:14
That said, there is a role, as I was saying earlier. I use, you know, ChatGPT, OpenAI's latest model. I use Perplexity. And I've learned a tremendous amount just asking questions about things I'm interested in and following rabbit holes. You know, there's a term you don't understand, you have that explained, and it goes on and on. So, and I agree, AI isn't there to replace teachers, but it can alleviate a lot of the drudgery of teaching, and it can also give students a way to study outside the classroom and, you know, to augment their learning. So, well, first of all, tell me how you feel AI should be used, and then let's talk about what McGraw Hill is doing.
Dylan: 16:28
Sure. And I want to say, as you mentioned, there are these uses, and I have a colleague, whom I respect immensely, who has done research on the therapeutic uses of personified AI to help people who are in distress in various ways. There is definitely potential for good there. My only point is that there's also potential for really bad outcomes. So I think if you do that follow-up work, it'll be totally worthwhile, because there's a lot of nuance and complexity in that issue. How I think AI should be used: I think AI, and McGraw Hill and I are aligned on this, should be used to augment human relationships. When people used to ask me in 2019 how I thought AI could be most helpful in education, I immediately talked about speech recognition and computer vision, because I fried my arms as a coder. I was really aware of how much easier it can be to just say what you want to say instead of having to type it, because to me it was physically painful to type it, right? But for young kids, it's also much easier for them to express themselves verbally than to write. Now, that doesn't mean they shouldn't write. Writing is an important skill that they need to develop, but there's a real richness in how young learners can express themselves with spoken language that they can't yet translate into written language. So there's a lot of good that can come from considerate applications of speech recognition to the classroom. Similarly with computer vision: you know, there's a lot of kids who do a lot of work in print in K-5, even though there's a lot of focus on digital stuff these days. And it should be that way. That's good.
But if I'm a teacher and I've got a bunch of worksheets, wouldn't it be great if I could just put them in a stack and have a thing scan them, and then the machine learning algorithm will identify six categories of major misconceptions that seem to be there, and one of those misconceptions is held by a plurality of the class. So hey, maybe tomorrow focus on that. You know, just make sure you shore up that gap in kids' understanding. Those are, in my mind, great examples of the use of AI to ease the logistic, administrative and computational burden on the teacher and help the teacher direct their attention, because that's the scarce resource in the classroom, right? The teacher's attention. So if you can help the teacher direct their attention where it's going to be most powerful, fantastic. You can put the teacher in a position to have that really rich conversation with their learners. Then you're augmenting those human relationships that are at the core of learning. That's how I think AI should be used in education, and that's the kind of work we're doing at McGraw Hill to use all kinds of AI, but Gen AI in particular, in education.
Dylan: 19:06
You mentioned we have two recently released Gen AI features within our existing products. One of them, in the higher ed space, is called AI Reader, and the idea there is: kids have always had a bunch of stuff to read in college, and kids have never really done all the reading. Apparently, and I haven't been in college recently, it's gotten worse. It's gotten more like, oh my gosh, how could I possibly read this 300-page book and this 50-page chapter in econ
Dylan: 19:31
and this text, whatever, in bio. So, problem to be solved: college students have a tremendous reading load that they cannot consistently meet. And you have that problem that we all know of: you're looking at a wall of text, and you read a paragraph, and then you realize you didn't get it, and you're rereading it two or three times and it's just not quite clicking for you. So, to address that problem, we talked to university professors and instructors, and we worked with them to come up with this idea: what if you could just highlight that text and say, could you just explain this a different way? That idea was the core of what has become AI Reader, and the goal there, again, is not technology for technology's sake. There's a learning experience we want you to have. We want you to wrestle with text and build meaning in your own mind from that text. Just like you said, you use Perplexity to be like, wait, explain this to me differently. You use these tools to give it to you in a different way, to make it a little more accessible and line up with your existing knowledge structures, so you can build and add to your knowledge structures. That's the basis of the cognitive theory of constructivism, which is a theory about learning: we learn by constructing knowledge in our own heads. It's not, there's knowledge here and I pour it there, and now it's there. It's like, no, no, no, you're showing them a model, but they have to build it for themselves. We have to construct it in our own heads. AI Reader is a tool to help students construct meaning in their own heads by giving them more avenues into a body of text. Quiz me on this, explain it to me like I'm a kid, simplify it, all of these things.
Dylan: 21:11
In 6-12, we had a different learning challenge that we were trying to solve, and AI was a way to help solve it. Kids in middle and high school are often engaging in digital experiences in which they're required to write short, free-form responses. Now, just like I talked earlier about there being a stack of worksheets, a teacher could plausibly read through them all, but not realistically. If I've got 30 kids in each of five sections doing this, you know, five times a class period, that's a lot of 200-word little blurbs for me to read through and try to provide meaningful feedback on, and sometimes that's not even the main goal of the assignment. It's not primarily a writing assignment; it's just an assignment that has a writing component in it. If I'm walking the classroom, like teachers often do, and I look over a kid's shoulder to see what they're working on and I read that they've written a little blurb, I might offer them feedback right then, but it's opportunistic and sporadic. So we wanted to give teachers a way to help provide that kind of feedback to all the students, if they want it.
Dylan: 22:08
That's the feature that we call Writing Assistant, and we're currently in beta with Writing Assistant in two of our programs. What it does is simply this: if I'm a student and I'm writing a short-form response, I can click a button that says give me feedback on my draft, and I engage in a couple of conversational turns with a tool. If I don't know how to get started, it'll give me a little push out of the gate, like, hey, just try. You know, the prompt is asking you to do this. You've read this story, now it wants you to respond to this particular question. What do you think? And then the kid will write some kind of draft, and then the program will just meet you where you are.
Dylan: 22:47
There's a rubric behind the scenes, right, to evaluate the quality of the draft, and wherever it is, the program's goal is just to try to get you to the next level up. Again, in a couple of conversational turns. You're not going to spend half an hour crafting and rewriting; it's just a quick touch to give that little bit of support that might help you improve your writing in the moment. In each of those two cases, AI Reader and Writing Assistant, there's a clear learning experience that we want to drive and there's a way that we can leverage AI to support it.
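The loop Dylan describes (evaluate the draft against a rubric, then nudge the student toward the next level up) can be sketched roughly as follows. The rubric levels, feedback messages and function names here are hypothetical illustrations, not McGraw Hill's actual implementation; in the real product the scoring would presumably be done by a Gen AI model, which is stubbed out here as a plain callable.

```python
# Illustrative sketch of a rubric-driven feedback loop: score the draft,
# then return feedback aimed at reaching the next rubric level up.

# Hypothetical feedback messages, keyed by the level they try to move the student TO.
NEXT_LEVEL_FEEDBACK = {
    1: "Good start. Add a detail from the story that supports your point.",
    2: "Nice. Now explain why that detail supports your answer.",
    3: "Strong response. Check spelling and punctuation, then submit.",
}
TOP_LEVEL = max(NEXT_LEVEL_FEEDBACK)

def feedback_for_draft(draft: str, score_fn) -> str:
    """score_fn stands in for the model that grades a draft 0..TOP_LEVEL."""
    if not draft.strip():
        # The "push out of the gate" case for a student who hasn't started.
        return "Just try: the prompt asks you to respond to the story. What do you think?"
    level = score_fn(draft)
    if level >= TOP_LEVEL:
        return "Looks good. You've met the rubric; you're ready to submit."
    # Meet the student where they are: target only the next level up.
    return NEXT_LEVEL_FEEDBACK[level + 1]
```

A quick-touch interaction would then call `feedback_for_draft` once per conversational turn, rather than driving a half-hour revision cycle.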
Craig: 23:16
Yeah. You know, the AI Reader, it occurred to me, and I've wondered about this. I mean, I do interact a lot with these chatbots for my own education, and, for example, I watch a lot of YouTube videos, I watch series on TV, and what amazes me is that I'll come to something two years after I know I watched it and I don't remember any of it. And I don't know the science, but there is something about passive engagement and active engagement that affects the learning process. And with reading, I'm guilty of this. Somebody will send me a report and I just load it into ChatGPT and ask for a summary. And it's very useful for short-term tasks, because, okay, I understand what the report's about, I see the main features, I can respond to it. But, you know, a week later I won't remember the report or the summary or anything. So how do you address that? And with students, do you see that as a growing problem? You said that they're reading less and less, but maybe one of the reasons is they're just reading summaries that are AI-generated and it just doesn't penetrate to long-term memory.
Craig: 25:20
Our sponsor for this episode is Extreme Networks, the company radically improving customer experiences with AI-powered automation for networking. Extreme is driving the convergence of AI, networking and security to transform the way businesses connect and protect their networks, to deliver faster performance, stronger security and a seamless user experience. Visit extremenetworks.com to learn more. That's extremenetworks, all run together, dot com.
Dylan: 26:03
There are certainly learning risks in anything that makes the learning task easier. Learning, in a sense, must be effortful. You know, like I talked earlier about the constructivist theory of learning, building a new knowledge structure is tough. Taking in new knowledge and incorporating it into existing understandings of the world is work, and our brains don't like to do work. So we try to offload that work, and we have forever, because it's calorically efficient to avoid work when we can, and brains are really calorically expensive.
Dylan: 26:39
The risks of not taking things in fully, if they're just CliffsNotes or quick summaries, are real. We've got to weigh those risks against the risks of not taking the material in at all because I didn't do the reading. And we also have to engage our learners in the metacognitive task of owning their own learning and their own learning goals. There's a trope, you know: is this going to be on the test, right? Why do kids ask that? They ask that because they need to, in some sense, manage their intake. You know, like, if you're telling me a bunch of interesting stuff because you're excited about history, but it's not actually on the test, I'm going to tune out. I'm holding myself accountable to learn the stuff that I know I'm going to be measured on. And in some sense we roll our eyes, because we want kids to be hungry about learning everything, but also, you know, they're rational actors, right? So they're going to make sure they're focused on what they need to be focused on. If your goal in taking a report that somebody sent and feeding it to ChatGPT for a quick summary is simply to have enough of an understanding of the report to be able to respond coherently, then you've achieved that goal. And, to your point, you didn't encode it deeply or elaborately enough for it to stay years later, or even months later. Similarly, the TV show that I watched a couple of seasons ago: I remember it being good, but I'm not exactly sure what happened.
Dylan: 28:05
If a learner is engaged in a psych course or an econ course or a finance course and they really want to learn a concept, they know they're going to be tested on it, and it's for their major, so they know it's going to be instrumentally useful for them going forward. We want to help them encode that information. The more richly we can help them encode it, the more likely they will be to remember it later. So there's absolutely a balance to be struck. You're right, and one of my biggest fears as a learning scientist supporting the implementation of AI in various areas is: how do we avoid this leading to a diminishment of human capacity? The two examples of that that haunt me are these. When I was a kid, I had memorized all my friends' phone numbers and my family's phone numbers, everybody's phone numbers, and I don't have anybody's phone numbers memorized anymore. I know my own and my wife's. I don't even know my son's phone number, because there's now a tool, and I have done the cognitive offloading of that task to that tool, and brains are great at not remembering stuff we don't need to, so I don't remember that anymore. It's in the tool. Similarly, we as a society have gotten less good at wayfinding because GPS is now ubiquitous, so we're not as good at dealing with maps or at remembering routes. We just plug in our destination and we turn when it tells us to turn. Those are both areas in which a capability of ours has diminished, and we can argue about whether, on balance, we're better or worse for it.
Dylan: 29:36
When we enter the domain of education, we've got to be really mindful of whether we're giving students a tool that will increase their capabilities or decrease them in ways that we think are damaging. The introduction of the calculator was an opportunity for society to grapple with this. Calculators became efficient enough, small enough and ubiquitous enough in the 70s to begin being introduced into classrooms, and there was a big furor over whether this was going to be the end of mathematics education as we knew it. We ultimately landed in a good place. We do not let students use calculators when we're trying to teach them arithmetic, because they have to understand the concepts and they have to develop a fluency with those concepts. We do allow students to use calculators in advanced trigonometry, calculus, those kinds of courses, and in that case we're actually enriching their learning, because there are things you can explore, in terms of the behavior of functions, for instance, with a graphing calculator that you could not feasibly have done in the 1950s, when you were plotting all the points by hand. So we're actually allowing kids to engage more deeply with rich conceptual material because of the affordances of this tool. So "are calculators good or bad for math" is not a reasonable question to ask. It's: how can calculators be good and bad for math?
Dylan: 30:49
We're now in a similar situation with various applications of Gen AI. The most obvious one: these tools are great at writing, but does that mean we should let kids just use them instead of learning how to write themselves? And what about the deep, intrinsic relationship between writing and thinking? I don't know who it was, maybe it was Wittgenstein, but someone said, how can I know what I think till I see what I say? And that's a great encapsulation of this relationship between our language and our thought.
Dylan: 31:20
If we simply let kids go, boop, boop, boop, and then there's an essay that comes out, they haven't done that work, they haven't grappled with those ideas, and they definitely have not learned as much as they would have if they had persisted through all those drafts and revisions and feedback and gotten to a point where they could express their thoughts in that way themselves. That would be bad. On the other hand, there are kids with language challenges, or kids who really have rich ideas but aren't great at expressing them yet. How can we use the technology to support those use cases without falling onto the side where we're actually diminishing an essential capacity for thinking? That's the challenge that we've got to meet every time we consider a new application of AI, Gen AI or otherwise, for education.
Craig: 32:03
Yeah, that's interesting. And you said something, you know, the joke about, is it going to be on the test? You forget as an adult how little learning meant to you when you were a student; it was really just grades. And there's been research into, and I've forgotten the term in the literature, but sort of continuous assessment with AI. So you have knowledge-tracing algorithms.
Dylan: 32:45
That was what Kidaptive did.
Craig: 32:48
Oh, okay, yeah. And you're interacting digitally, maybe not with a tutor, but you're testing, and all that stuff is being captured. So, theoretically, the system should know at any point in time where the student is vis-a-vis the course material, and in that case you don't really need a test, which is a snapshot. Have you done work on that?
Dylan: 33:28
I've done work in this space, yeah. Continuous embedded assessment was one of the founding principles of Kidaptive. As kids are engaging with lots of different learning experiences over time in different contexts, they're demonstrating what they know or understand or are interested in all over the place, and instead of a model that demands maximum performance at one point in time in one single context, you have a model that accumulates typical performance across a variety of contexts across a lot of time. For that latter model we use the analogy of a mosaic, which is actually my team name at McGraw Hill, and it was the name of a product, Learner Mosaic, that we put out when we were at Kidaptive. You've got all these little tiles, and each one of them on its own isn't very interesting, but if you put them together in the right way you get a really rich picture. The same is true for continuous embedded assessment. If done well, and if the data you're gathering are gathered in a context that allows you to make good inferences from those data collections, then you can get a richer picture of the learner than you would from just a couple of drop-in-from-the-sky high-stakes assessments throughout the year. But all of the devils are in those details. A really lame approach would be to just hoover up all the data from everything the kid is doing and treat it all as if it's undifferentiated. But the way a learner is going to perform on an exit ticket or a practice assignment is very different from the way a learner is going to perform on an end-of-week quiz or a unit assessment or a final exam. The stakes are different, the kid knows it, the context in which the kid is doing that thinking is therefore different, and the level of performance you would expect from the kid is therefore different, right? That kind of consideration should be there for all of these continuous embedded assessment contexts.
Dylan: 35:20
And one of the challenges that has made really rich continuous embedded assessment impractical for most is that to operationalize what you're measuring in all of those different contexts, and then to bring those measurements together in a way that's reasonable, is very hard. We've applied for a patent for an approach that we have to doing that, but it requires deep psychometric work, careful learning-sciences analysis of all of the different contexts, and what claims could be defended about what you're seeing in all those contexts. But the idea in principle is, I think, sound and worth exploring. So I've spent the last 15 years trying to explore it. That said, there's still a tremendous amount of work to be done theoretically, technologically, socially, politically before we get to a point where we could reasonably say, hey, we no longer need the end-of-year test because we already have a rich enough picture of every learner.
Dylan: 36:13
There are upsides to the continuous embedded assessment approach, one of which is there's less likely to be cheating. It's much easier for me to hire somebody else to go take the SAT for me once than to hire somebody else to answer every question I answer about every math skill that I'm exhibiting any kind of proficiency on throughout the year. And because, again, the stakes are lower for each of those individual things, it's not really worth it for me to do it. So you're much more likely to get a true read on the learner. However, it's much lower resolution at every time point. So you have to figure out how to get all of these really low-resolution signals to come together, strip out all of the noise, and get to the high-resolution signal that you want: that twanging of the real string of the construct you want to measure.
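The signal-combination problem Dylan describes can be sketched with a standard statistical idea, inverse-variance weighting: each low-resolution observation counts in proportion to how reliable its context is, so many noisy exit tickets and one cleaner quiz together yield an estimate sharper than any single measurement. This is only an illustrative sketch (the function name and the variance numbers are invented for the example), not the patented approach mentioned in the interview.

```python
def combine_signals(observations):
    """Combine noisy measurements of one construct on a common scale.

    observations: list of (score, variance) pairs, where variance encodes
    how noisy that assessment context is (an exit ticket is noisier than
    a unit quiz). Returns (estimate, variance) of the precision-weighted
    average; the combined variance is always smaller than any input's.
    """
    total_precision = sum(1.0 / var for _, var in observations)
    estimate = sum(score / var for score, var in observations) / total_precision
    return estimate, 1.0 / total_precision

# Hypothetical example: four noisy exit tickets plus one cleaner unit quiz.
exit_tickets = [(0.6, 0.25), (0.8, 0.25), (0.5, 0.25), (0.7, 0.25)]
unit_quiz = [(0.72, 0.05)]

estimate, variance = combine_signals(exit_tickets + unit_quiz)
```

The combined variance (1/36, about 0.028) comes out below even the quiz's 0.05, which is the "mosaic" intuition in miniature: tiles that are individually uninformative sharpen the picture when weighted and pooled.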
Dylan: 36:58
And that's challenging again: statistically challenging, scientifically challenging, technologically challenging. And then there's this whole other basket of challenges, like the social and the political, where there's a really creepy version of this that's like ubiquitous surveillance. You are always being watched, you are always being judged, every choice you make is going to be on your permanent record. You remember that one from when you were a kid? This is going on your permanent record.
Dylan: 37:25
I had some disciplinary trouble, so I got told that a lot. We don't want everything to be on your permanent record. We want to give kids safe space to practice. That's one of the richest things about game-based learning, actually, which was again one of my original focuses in grad school. If a kid knows that everything they're doing is being monitored and evaluated, that's crushing, you know, or at least it could be crushing, and it definitely will affect their performance in ways that we might not want. So there's a lot of nuance and difficulty there, but I think it's an immensely rich space for exploration.
Craig: 38:02
Yeah. I mean, that's interesting, the thought of surveillance; I hadn't looked at it through that lens, although if it's only in the classroom setting, maybe kids would forget about it. Yeah, it's in the background. But one of the problems, too, that I ran into talking to people is that a lot of the data that you want to collect is still pencil and paper, which makes it difficult. It all ends up in a waste bin. Do you think, as education becomes increasingly digital, doing tests online, or doing homework online or exercises online, as that becomes the norm, does that lead to stronger data flows for this kind of application?
Dylan: 39:26
The pandemic was definitely an accelerant on the uptake of digital learning experiences and the data that result from those learning experiences, but it also put into really stark relief the limitations of those learning experiences and how impoverished they can be compared to a rich in-person learning experience. Similarly, with the print-to-digital thing, especially in K-5, but even up through higher education, there's a lot to be learned that cannot adequately be replicated in a purely digital context. So I think the burden is on technologists to think about ways to capture data from those in-person interactions. I alluded earlier to the idea of computer vision. Let the kids do the worksheet on paper, because they're six years old and they're holding these fat pencils and doing the best they can, and then use technology to make it so that you could still capture those data. Or, when we were still Kidaptive, one of our partners was an early learning provider, and they had a really good assessment framework, but it was all observational assessment, because these are toddlers; they're not going to click anything, they're just going to drool on blocks. The adults who were caring for and supporting the learning of those toddlers were instead responding to observational rubrics that, taken together, gave a really detailed, high-dimensional representation of learner progress.
Dylan: 40:53
We can do observational checklists for older kids too, especially for a lot of the skills that we really care about these days, like collaboration and creativity and all of these. I mean, it's ironic we still call them 21st-century skills; we're like 25% of the way through the 21st century and we're still trying to get uptake of these skills. Those kinds of skills don't lend themselves to multiple choice or flashcard sorts of things. They're about dialogue that's happening among real people in a classroom. So figure out how to use technology to evaluate those collaborations, not in intrusive ways where the kid has to perform a conversation with friends, but in ways where the technology kind of recedes into the background and again eases the burden on the teacher to do that evaluation, but supports the teacher with insights.
Dylan: 41:41
Like, for instance, there's a company that does interesting work in college where they have breakout sessions, breakout rooms and college students are discussing some topic and there's a prompt and there's some structure to it and the system is observing how the students are responding to one another, taking up one another's ideas and building on them, you know, or introducing countervailing evidence or all of that stuff. Then they can provide summaries of all of those interactions back to the instructors. I might be teaching a survey course with 700 kids. I am not going to sit in 700 breakouts, but if I get some insights and the system sends me, you know, links to videos where I could like sort of drop in and watch how a particular interaction went, I could be like, oh wow, that is really cool.
Dylan: 42:26
That's another example of controlling the instructor's attention, right? You extract from a natural interaction the data that could be extracted from it, and then you use those data to guide the instructor's attention where it can be most productively spent. I think those are excellent uses of these kinds of tools in educational contexts: they don't marginalize the instructor, they don't marginalize the human relationships. They let us get right at that human stuff, and the technology is off in the shadows, in the distance, you know, supporting it, not in the center.
Craig: 43:04
Yeah. Let me ask you something different that has occurred to me while you're talking. You know, I bemoan the current state of education in the United States. A lot of it's the political choices that people are making because they don't seem to understand the underlying issues. A lot of it's just interacting with people who should be educated and don't know a lot. Some of it's these late-night talk shows that go out on the street and ask people simple facts, or show them images of famous people, and they have no idea who they are. But I'm not sure that it's worse now than it ever was, and I wonder, in a way, whether
Craig: 44:18
it's just that the focus of modern society has shifted, so a lot of the things that we used to think were important to know are no longer really important. I mean, I remember as a little kid, my mother grew up on a ranch in Montana, and I would hang out with the ranch kids from neighboring ranches. There was one kid I was close with, and we went camping one time, and I was talking about something; I wanted to go to Europe, I think. And he asked me whether England is the capital of Europe. And I remember just being stunned that he wouldn't know that England is a country, not a city in a country, not a capital, and all that stuff.
Craig: 45:33
So maybe it's that education has not declined as much as our exposure to people has increased, and we're sort of realizing that there are huge pools of relatively uneducated people in the world. I mean, what do you think? And then, on the other side, there's the access to information through social media and YouTube. Unfortunately there's a lot of misinformation, but people know a lot more than they did in the fifties just because the average person has so much information coming at them all the time. So do you have a sense of whether people are less educated today, en recul, as they say in French, or whether it's that the elite, who can afford it, whose parents understand the value of education, educate their kids, but a lot of the world is still muddling along?
Dylan: 46:41
Education in the 20th century, especially in places like the US, increased the set of people whose success we cared about, and that's a good thing. And, like you say, the descendants of the elites who had the private tutors and the boarding schools and went to the universities are still having a similarly great educational trajectory. It used to be that, if you measured the success of all the educated people, it was a small circle and they were pretty much all pretty well resourced. As the circle gets bigger, now you're including people who have much less access to that kind of support, and so, yeah, the average achievement might go down, but that doesn't mean that the system is doing worse. In fact, in a sense it means the system is doing better, because it's now reaching people who used to have zero and now they've got something.
Dylan: 47:45
There is a psychological effect called the Flynn effect. Do you know the Flynn effect? It's an empirical fact, and I don't even know that there's a consensus on why it happens.
Dylan: 47:59
There has been a steady and unexplained upward trend in scores on intelligence tests through the 20th century. That is definitely too fast to be the human species evolving, and it's not like, oh, we're eating better. People cannot come up with a good reason why the scores are getting higher like that. But what we don't think is happening is that people are actually getting smarter; it's not like my grandparents were mentally disabled. Instead, something else is going on, and one of the theories that I've heard is this: intelligence tests have always been criticized as measuring a kind of artificial and weirdly narrow set of skills that do predict success on various tasks but are also not reflective of a lot of things, for instance, that your friends on the ranch in Montana needed to do in their everyday lives. As daily life comes to involve more tasks that are similar to those on intelligence tests, the general capability to achieve those tasks increases, and so performance on the tests goes up, simply because what the tests measure is more closely mapped to what the human brain is being asked to do day to day. I think that's a reasonable explanation for the Flynn effect. And, in general, yeah, society is moving at a faster pace, and the rate at which we're asked to take in information, process the information, make choices based on the information, et cetera, has increased, so our ability to do so will increase, because that's how the human brain responds: it'll get better at the thing it's being asked to do a lot.
Dylan: 49:32
By the same token, as you alluded to, there are things that we should no longer care whether our learners are good at. For instance, our grandparents, if they took math in high school or college, were probably expected to be able to use a slide rule fluently to compute square roots. And we do not have computation of square roots on the curriculum in any math program that I know of anymore, and the good reason for that is that it's a lame thing to spend kids' time getting good at. There's no reason to do it. You just understand conceptually what a square root is and use a system that will provide that square root for you. Cool, you're good. We need to continually do that reappraisal of the core of our curricular ideals for our students and say: is this still a relevant thing for learners to be able to know or do? And if not, what are new relevant things that we want to make sure our society is equipped to do? Does that make sense?
Craig: 50:26
Yeah, yeah. The other thing: I was in Pittsburgh a couple of years ago, and I have a friend who's a television reporter there, and I went with him on a story he was doing in a local school about CMU. Carnegie Mellon had a pilot program in the school using AI. It wasn't an AI tutor, as I recall, but it was some...
Dylan: 51:00
It might have been an intelligent tutoring system. CMU was a great leader in the intelligent tutoring system space. Maybe that was it.
Craig: 51:07
Yeah. And so this was during sort of a study hall, so there was a teacher present and the kids were supposed to be interacting with this system. And, you know, I love the theory, and I've written articles about how this is the promise of the future of education and it's going to transform everything. And sitting there watching these kids, and this was at a school that was fairly selective, so they were smart kids, but man, they would typically look at their screen for maybe 10 seconds and then they would be moving around and seeing what each other is doing and getting up and talking to their friends, and then the teacher's telling them to sit. I mean, there was just no focus at all.
Craig: 52:09
And that really struck me: yeah, the theory is great, but unless you have students that want to engage and that are interested in learning, not just in passing tests, it's moot. I mean, you can put in all the fancy tools. So part of the problem is just, you know, and I imagine if you're a teacher (I'm fascinated by teaching), if you're in a room with 20 kids, there are probably five kids in there that you can see really want to learn and have that motivation.
Dylan: 52:59
I would dispute that. I would say all of the kids in there really want to learn something. There are five kids in there who might be really excited to learn the thing that you're supposed to be teaching them that day. Plutarch, thousands of years ago, said the mind is not a vessel to be filled but a fire to be kindled, and that is 100% right. And the point you made about being able to observe these kids as they were engaging with this digital learning experience touches on the point I made earlier.
Dylan: 53:30
In the context of continuous embedded assessment, we really have to deeply understand the context in which the data are collected to know whether they're junk data or realistic, right? I sat in a class one time, and it was this innovative math program that was asking the kids to calculate different units of volume: how many quarts in a gallon, you know? And the kids were looking at each other like, what's a quart? They didn't even know the terms, and so they're just typing in random numbers and it's telling them that they're bad at math. They're not bad at math; this system is not measuring them in a meaningful way. So if you were to hook a system like that up to a holistic learner profile that was going to make claims about that kid's ability in math, you would be doing harm to that kid.
Dylan: 54:16
The careful observation of what's happening in the learning context, and of the way that the learner is engaging or not engaging with the intended learning goal, is crucial for all of this work. It was cool that you were able to be in a position where you could see those kids aren't learning what you think they're learning; they're not doing what you think they're doing. And that's a really important part of the creation of a learning experience: how do you align the learner's goals with the pedagogical goals? Games, when done well, are really cool that way. Games are a great opportunity for kids to be intrinsically motivated to play around with an idea, and if you can capture that intrinsic motivation to explore, then you've got a great learning experience in the making. If instead it's a drill-and-kill that your teacher told you you have to do, and it's lame and you're not interested, you're just going to kind of click through it desultorily. You're not going to get a good measure, the kid's not going to learn anything, you're wasting everybody's time.
Craig: 55:12
In three minutes, can you tell me what you guys are doing, if anything, or what the promise of gamification is? Because that's something that hasn't really come on stream in a big way. I know you've gestured to it.
Dylan: 55:29
In three minutes! I wrote my dissertation on game-based learning, and I could talk to you for hours about it. Play is the single most powerful innovation for learning in the history of the world; there is nothing else that even comes close. Play is so widespread across the animal kingdom that it must be tremendously useful for learning, because evolution does not keep traits that aren't useful, right? So there's something demonstrably powerful about play. The better we can understand where that power is and how to leverage it in service of the learning goals that we think are important for learners, the more powerful the game-based, play-based learning experiences can be. When instead we just kind of schmaltz on some points and levels and stars, we're doing a disservice both to the power of play and to the autonomy and integrity of the learner.
Craig: 56:24
Even if you think it's a bit overhyped, AI is suddenly everywhere, from self-driving cars to molecular medicine to business efficiency. If it's not in your industry yet, it's coming fast. But AI needs a lot of speed and computing power. So how do you compete without costs spiraling out of control? Time to upgrade to the next generation of the cloud: Oracle Cloud Infrastructure, or OCI. OCI is a blazing fast and secure platform for your infrastructure, database, application development, plus all your AI machine learning workloads. OCI costs 50% less for compute and 80% less for networking, so you're saving a pile of money. Thousands of businesses have already upgraded to OCI, including MGM Resorts, Specialized Bikes, and Fireworks AI. Right now, Oracle is offering to cut your current cloud bill in half if you move to OCI. This is for new US customers, with minimum financial commitment. See if your company qualifies for this special offer at oracle.com/eyeonai. That's eyeonai, all run together: E-Y-E-O-N-A-I. So go to oracle.com/eyeonai to see if your company qualifies for this special offer.