Identify students at risk for dropping out

Massive open online courses (MOOCs) grant huge numbers of people access to world-class educational resources, but they also suffer high rates of attrition.

To some degree, that’s inevitable: Many people who enroll in MOOCs may have no interest in doing homework, but simply plan to listen to video lectures in their spare time.

Others, however, may begin courses with the firm intention of completing them but get derailed by life’s other demands. Identifying those people before they drop out and providing them with extra help could make their MOOC participation much more productive.

The problem is that you don’t know who’s actually dropped out — or, in MOOC parlance, “stopped out” — until the MOOC has been completed. One missed deadline does not a stopout make; but after the second or third missed deadline, it may be too late for an intervention to do any good.

Last week, at the International Conference on Artificial Intelligence in Education, MIT researchers showed that a dropout-prediction model trained on data from one offering of a course can help predict which students will stop out of the next offering. The prediction remains fairly accurate even if the organization of the course changes, so that the data collected during one offering doesn’t exactly match the data collected during the next.

“There’s a known area in machine learning called transfer learning, where you train a machine-learning model in one environment and see what you have to do to adapt it to a new environment,” says Kalyan Veeramachaneni, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory who conducted the study together with Sebastien Boyer, a graduate student in MIT’s Technology and Policy Program. “Because if you’re not able to do that, then the model isn’t worth anything, other than the insight it may give you. It cannot be used for real-time prediction.”

Generic descriptors

Veeramachaneni and Boyer’s first step was to develop a set of variables that would allow them to compare data collected during different offerings of the same course — or, indeed, offerings of different courses. These include things such as average time spent per correct homework problem and amount of time spent with video lectures or other resources.

Next, for each of three different offerings of the same course, they normalized the raw values of those variables against the class averages. So, for instance, a student who spent two hours a week watching videos where the class average was three would have a video-watching score of 0.67, while a student who spent four hours a week watching videos would have a score of 1.33.
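A minimal sketch of what that normalization might look like in code; the feature names, data layout, and library choice are illustrative assumptions, not the researchers' actual pipeline.

```python
# Per-student features normalized against class averages, as described above.
import pandas as pd

# Hypothetical raw features for one course offering (one row per student).
raw = pd.DataFrame({
    "student_id":      [1, 2, 3],
    "video_hours":     [2.0, 4.0, 3.0],      # hours of lecture video watched per week
    "sec_per_correct": [90.0, 120.0, 60.0],  # avg seconds spent per correct homework problem
})

# Divide each feature by its class average, so a value of 1.0 means
# "exactly average for this offering." This makes different offerings
# (and even different courses) directly comparable.
features = raw.set_index("student_id")
normalized = features / features.mean()

print(normalized)
# A student who watched 2 hours of video against a 3-hour class average
# gets a video score of 0.67; 4 hours gives 1.33, matching the example above.
```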

They ran the normalized data for the first course offering through a machine-learning algorithm that tried to find correlations between particular values of the variables and stopout. Then they used those correlations to try to predict stopout in the next two offerings of the course. They repeated the process with the second course offering, using the resulting model to predict stopout in the third.
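For concreteness, here is a hedged sketch of that train-on-one-offering, predict-on-the-next setup. The choice of learner (logistic regression), the synthetic data, and the variable names are all assumptions for illustration; the paper's actual model and features may differ.

```python
# Train a stopout predictor on offering 1, then apply it to offering 2.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# X_offering1: normalized descriptors for offering-1 students (rows = students).
# y_offering1: 1 if the student eventually stopped out, 0 otherwise.
X_offering1 = rng.normal(loc=1.0, scale=0.3, size=(500, 2))
y_offering1 = (X_offering1[:, 0] < 0.9).astype(int)  # toy label rule, for illustration only

model = LogisticRegression()
model.fit(X_offering1, y_offering1)

# Because the descriptors are normalized per offering, the same model can be
# applied to offering 2 even if the course organization has changed.
X_offering2 = rng.normal(loc=1.0, scale=0.3, size=(400, 2))
stopout_risk = model.predict_proba(X_offering2)[:, 1]  # probability of stopping out
print(stopout_risk[:5])
```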

Tipping the balance

Already, the model’s predictions were fairly accurate. But Veeramachaneni and Boyer hoped to do better. They tried several different techniques to improve the model’s accuracy, but the one that fared best is called importance sampling. For each student enrolled in, say, the second offering of the course, they found the student in the first offering who provided the closest match, as determined by a “distance function” that factored in all the variables. Then, according to the closeness of the match, they gave the statistics on the student from the first offering a greater weight during the machine-learning process.
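The weighting step might look roughly like the sketch below, which matches each offering-2 student to the nearest offering-1 student and then upweights those matched offering-1 students during training. The plain Euclidean distance and the exponential-decay weights are assumptions for illustration; the paper's actual distance function and weighting scheme are not spelled out here.

```python
# Importance sampling via nearest-neighbor matching across course offerings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X1 = rng.normal(1.0, 0.3, size=(500, 2))   # offering-1 descriptors
y1 = (X1[:, 0] < 0.9).astype(int)          # offering-1 stopout labels (toy rule)
X2 = rng.normal(1.1, 0.3, size=(400, 2))   # offering-2 descriptors (labels unknown)

# For each offering-2 student, find the closest offering-1 student.
nn = NearestNeighbors(n_neighbors=1).fit(X1)
dist, idx = nn.kneighbors(X2)

# Offering-1 students who closely match someone in offering 2 get a larger
# weight during training; unmatched students keep a small baseline weight.
weights = np.full(len(X1), 0.1)
np.add.at(weights, idx.ravel(), np.exp(-dist.ravel()))

weighted_model = LogisticRegression()
weighted_model.fit(X1, y1, sample_weight=weights)
stopout_risk = weighted_model.predict_proba(X2)[:, 1]
```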

Cheaper memory without sacrificing speed

Random-access memory, or RAM, is where computers like to store the data they’re working on. A processor can retrieve data from RAM tens of thousands of times more rapidly than it can from the computer’s disk drive.

But in the age of big data, data sets are often much too large to fit in a single computer’s RAM. Sequencing data describing a single large genome could take up the RAM of somewhere between 40 and 100 typical computers.

Flash memory — the type of memory used by most portable devices — could provide an alternative to conventional RAM for big-data applications. It’s about a tenth as expensive, and it consumes about a tenth as much power.

The problem is that it’s also a tenth as fast. But at the International Symposium on Computer Architecture in June, MIT researchers presented a new system that, for several common big-data applications, should make servers using flash memory as efficient as those using conventional RAM, while preserving their power and cost savings.

The researchers also presented experimental evidence showing that, if the servers executing a distributed computation have to go to disk for data even 5 percent of the time, their performance falls to a level comparable with that of flash anyway.

In other words, even without the researchers’ new techniques for accelerating data retrieval from flash memory, 40 servers with 10 terabytes’ worth of RAM couldn’t handle a 10.5-terabyte computation any better than 20 servers with 20 terabytes’ worth of flash memory, which would consume only a fraction as much power.
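A rough sanity check of that claim, using the "tenth as fast" figure above for flash and an assumed slowdown for disk. The numbers are illustrative orders of magnitude, not measurements from the paper.

```python
# Back-of-envelope cost of a RAM cluster that spills to disk 5% of the time.
# Take a RAM access as 1 unit, a flash access as 10 units (the "tenth as fast"
# figure above), and a disk access as roughly 200 units (an assumption).
RAM_COST, FLASH_COST, DISK_COST = 1.0, 10.0, 200.0

# Average access cost when 5% of accesses miss RAM and go to disk:
mixed_cost = 0.95 * RAM_COST + 0.05 * DISK_COST   # = 10.95 units
print(f"RAM + 5% disk: {mixed_cost:.2f} units vs all-flash: {FLASH_COST} units")
# Under these assumptions the RAM cluster is no faster than an all-flash one,
# which is the point of the comparison in the text.
```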

“This is not a replacement for DRAM [dynamic RAM] or anything like that,” says Arvind, the Johnson Professor of Computer Science and Engineering at MIT, whose group performed the new work. “But there may be many applications that can take advantage of this new style of architecture. Which companies recognize: Everybody’s experimenting with different aspects of flash. We’re just trying to establish another point in the design space.”

Joining Arvind on the new paper are Sang Woo Jun and Ming Liu, MIT graduate students in computer science and engineering and joint first authors; their fellow grad student Shuotao Xu; Sungjin Lee, a postdoc in Arvind’s group; Myron King and Jamey Hicks, who did their PhDs with Arvind and were researchers at Quanta Computer when the new system was developed; and one of their colleagues from Quanta, John Ankcorn — who is also an MIT alumnus.

Outsourced computation

The researchers were able to make a network of flash-based servers competitive with a network of RAM-based servers by moving a little computational power off of the servers and onto the chips that control the flash drives. By preprocessing some of the data on the flash drives before passing it back to the servers, those chips can make distributed computation much more efficient. And since the preprocessing algorithms are wired into the chips, they dispense with the computational overhead associated with running an operating system, maintaining a file system, and the like.
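Conceptually, the division of labor looks something like the sketch below: a preprocessing step (here, a simple filter) runs next to the storage, and only the reduced result crosses over to the server. The function names and data are hypothetical, and in the real system this step is implemented in the FPGA attached to each flash board rather than in software.

```python
# Illustrative "near-data" preprocessing: filter at the storage layer,
# aggregate at the server, so far less data has to move between the two.

def storage_side_preprocess(pages, predicate):
    """Runs next to the flash chips: scan pages and keep only matching records."""
    return [record for page in pages for record in page if predicate(record)]

def server_side_aggregate(filtered_records):
    """Runs on the server: only the (much smaller) preprocessed result arrives."""
    return sum(filtered_records)

# Hypothetical data: pages of numeric records stored in flash.
pages = [[i, i + 1, i + 2] for i in range(0, 30, 3)]

# Without pushdown, all 30 records would cross to the server; with pushdown,
# only the records matching the predicate do.
matching = storage_side_preprocess(pages, predicate=lambda r: r % 2 == 0)
print(server_side_aggregate(matching))
```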

With hardware contributed by some of their sponsors — Quanta, Samsung, and Xilinx — the researchers built a prototype network of 20 servers. Each server was connected to a field-programmable gate array, or FPGA, a kind of chip that can be reprogrammed to mimic different types of electrical circuits. Each FPGA, in turn, was connected to two half-terabyte — or 500-gigabyte — flash chips and to the two FPGAs nearest it in the server rack.