Monthly Archives: December 2016

System fixes bugs by importing functionality

At the Association for Computing Machinery’s Programming Language Design and Implementation conference this month, MIT researchers presented a new system that repairs dangerous software bugs by automatically importing functionality from other, more secure applications.

Remarkably, the system, dubbed CodePhage, doesn’t require access to the source code of the applications whose functionality it’s borrowing. Instead, it analyzes the applications’ execution and characterizes the types of security checks they perform. As a consequence, it can import checks from applications written in programming languages other than the one in which the program it’s repairing was written.

Once it’s imported code into a vulnerable application, CodePhage can provide a further layer of analysis that guarantees the bug has been repaired.

“We have tons of source code available in open-source repositories, millions of projects, and a lot of these projects implement similar specifications,” says Stelios Sidiroglou-Douskos, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) who led the development of CodePhage. “Even though that might not be the core functionality of the program, they frequently have subcomponents that share functionality across a large number of projects.”

With CodePhage, he says, “over time, what you’d be doing is building this hybrid system that takes the best components from all these implementations.”

Sidiroglou-Douskos and his coauthors — MIT professor of computer science and engineering Martin Rinard, graduate student Fan Long, and Eric Lahtinen, a researcher in Rinard’s group — refer to the program CodePhage is repairing as the “recipient” and the program whose functionality it’s borrowing as the “donor.” To begin its analysis, CodePhage requires two sample inputs: one that causes the recipient to crash and one that doesn’t. A bug-locating program that the same group reported in March, dubbed DIODE, generates crash-inducing inputs automatically. But a user may simply have found that trying to open a particular file caused a crash.

Carrying the past

First, CodePhage feeds the “safe” input — the one that doesn’t induce crashes — to the donor. It then tracks the sequence of operations the donor executes and records them using a symbolic expression, a string of symbols that describes the logical constraints the operations impose.

At some point, for instance, the donor may check to see whether the size of the input is below some threshold. If it is, CodePhage will add a term to its growing symbolic expression that represents the condition of being below that threshold. It doesn’t record the actual size of the file — just the constraint imposed by the check.
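
In rough terms, the recording step can be pictured with a short Python sketch. The threshold, constraint strings, and function names below are invented for illustration; CodePhage itself instruments running programs rather than operating on code like this.

```python
# A minimal sketch of constraint recording (illustrative only).

MAX_SIZE = 4096  # hypothetical threshold checked by the donor

def run_donor_with_tracing(data):
    """Run a donor-style check, logging the constraint it imposes
    rather than the concrete value it saw."""
    trace = []
    if len(data) < MAX_SIZE:
        # Record the condition, not the actual size of this input.
        trace.append(f"len(input) < {MAX_SIZE}")
    else:
        raise ValueError("donor rejected input: too large")
    return trace

print(run_donor_with_tracing(b"safe input"))
# -> ['len(input) < 4096']
```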

Next, CodePhage feeds the donor the crash-inducing input. Again, it builds up a symbolic expression that represents the operations the donor performs. When the new symbolic expression diverges from the old one, however, CodePhage interrupts the process. The divergence represents a constraint that the safe input meets and the crash-inducing input does not. As such, it could be a security check missing from the recipient.
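
The comparison step can be sketched the same way. Here each trace is modeled as a simple list of constraint strings, an assumption made for illustration; the point is that the first place the traces disagree flags the candidate missing check.

```python
# Sketch: find the first constraint the safe input satisfied but the
# crash-inducing input did not (hypothetical trace representation).

def first_divergence(safe_trace, crash_trace):
    for i, constraint in enumerate(safe_trace):
        if i >= len(crash_trace) or crash_trace[i] != constraint:
            return constraint  # candidate check missing from the recipient
    return None

safe  = ["len(input) < 4096", "header == b'GIF89a'"]
crash = ["len(input) < 4096"]
print(first_divergence(safe, crash))  # -> "header == b'GIF89a'"
```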

CodePhage then analyzes the recipient to find locations at which the input meets most, but not quite all, of the constraints described by the new symbolic expression. The recipient may perform different operations in a different order than the donor does, and it may store data in different forms. But the symbolic expression describes the state of the data after it’s been processed, not the processing itself.
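
That suggests a simple way to picture the search: score each candidate location by the fraction of donor-derived constraints that already hold there, and prefer sites where most, but not all, of them do. The sites and scoring rule below are hypothetical simplifications, not CodePhage’s actual analysis.

```python
# Sketch: rank candidate insertion sites in the recipient by how many
# donor-derived constraints already hold there (illustrative only).

def score(site_constraints, donor_constraints):
    return len(site_constraints & donor_constraints) / len(donor_constraints)

donor = {"len(input) < 4096", "header == b'GIF89a'", "width > 0"}
sites = {
    "program_entry":      {"len(input) < 4096"},
    "after_header_parse": {"len(input) < 4096", "header == b'GIF89a'"},
}
best = max(sites, key=lambda name: score(sites[name], donor))
print(best)  # -> "after_header_parse": most, but not all, constraints hold
```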

Algorithm magnifies motions indiscernible to naked eye

For several years now, the research groups of MIT professors of computer science and engineering William Freeman and Frédo Durand have been investigating techniques for amplifying movements captured by video but indiscernible to the human eye. Versions of their algorithms can make the human pulse visible and even recover intelligible speech from the vibrations of objects filmed through soundproof glass.

Earlier this month, at the Computer Vision and Pattern Recognition conference, Freeman, Durand, and colleagues at the Qatar Computing Research Institute (QCRI) presented a new version of the algorithm that can amplify small motions even when they’re contained within objects executing large motions. So, for instance, it could make visible the precise sequence of muscle contractions in the arms of a baseball player swinging the bat, or in the legs of a soccer player taking a corner kick.

“The previous version of the algorithm assumed everything was small in the video,” Durand says. “Now we want to be able to magnify small motions that are hidden within large motions. The basic idea is to try to cancel the large motion and go back to the previous situation.”

Canceling the large motion means determining which pixels of successive frames of video belong to a moving object and which belong to the background. As Durand explains, that problem becomes particularly acute at the object’s boundaries.
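
As a rough illustration of the idea, the sketch below cancels a purely translational large motion by registering each frame to the first one using OpenCV’s phase correlation. The paper handles far more general motions; this simplified pipeline is a stand-in for the intuition, not the authors’ method.

```python
# Minimal sketch of "cancel the large motion": align every frame to
# frame 0 so only the residual small motions remain, which can then be
# temporally filtered and amplified as in the earlier work.
import numpy as np
import cv2

def cancel_large_motion(frames):
    """Shift each BGR frame so the moving object stays aligned with frame 0."""
    ref = np.float32(cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY))
    h, w = ref.shape
    stabilized = [frames[0]]
    for frame in frames[1:]:
        gray = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        # Estimate the dominant translation between this frame and the reference.
        (dx, dy), _ = cv2.phaseCorrelate(ref, gray)
        # Translate the frame back by the estimated shift, cancelling the motion.
        M = np.float32([[1, 0, -dx], [0, 1, -dy]])
        stabilized.append(cv2.warpAffine(frame, M, (w, h)))
    return stabilized
```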

If a digital camera captures an image of, say, a red object against a blue background, some of its photosensors will register red light, and some will register blue. But the sensors corresponding to the object’s boundaries may in fact receive light from both foreground and background, so they’ll register varying shades of purple.
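
The mixing is easy to see numerically. In the sketch below, a boundary pixel records a blend of foreground and background light, with a fraction alpha (chosen arbitrarily here) coming from the object.

```python
# Sketch of boundary-pixel mixing: an edge sensor records a blend of
# foreground (red) and background (blue) light.
import numpy as np

red  = np.array([255, 0, 0])
blue = np.array([0, 0, 255])
for alpha in (1.0, 0.75, 0.5, 0.25, 0.0):  # fraction of light from the object
    pixel = (alpha * red + (1 - alpha) * blue).astype(int)
    print(f"alpha={alpha:.2f} -> RGB {tuple(pixel)}")
# alpha=0.50 gives RGB (127, 0, 127), a shade of purple
```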

iRobot provides workshop for students

Thirty-seven middle school students from Boston, Cambridge, and Lawrence, Massachusetts, recently participated in a hands-on robotics workshop at MIT with 27 mentors – undergraduate students, graduate students, and young professionals. Engineers from iRobot joined the students and mentors to demonstrate several of their products, ranging from the popular Roomba vacuum cleaning robot to more advanced robots that facilitate remote collaboration and provide situational awareness in military settings.

The workshop – part of the STEM Mentoring Program hosted by the MIT Office of Engineering Outreach Programs – gave students a glimpse into the complexity of programming robots. “Robots don’t start out with minds of their own,” says STEM Program Coordinator Catherine Park. “There is a lot of work that goes into enabling robots to do the things they do.”

Along with learning about iRobot products, students and their mentors took part in an activity that demonstrated some basic principles of programming. The group worked in teams to write pseudocode and then followed that code to traverse a grid and pick up items, much like the Roomba does.
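
A short program in the same spirit might look like the sketch below, which sweeps a grid row by row and picks up the items it finds. This is a hypothetical reconstruction for illustration, not the pseudocode the students actually wrote.

```python
# Illustrative version of the exercise: sweep a grid and "pick up" items,
# much as a Roomba covers a room.

grid = [
    [".", "*", "."],
    [".", ".", "*"],
    ["*", ".", "."],
]  # "*" marks an item to pick up

picked_up = 0
for row in grid:                       # visit every row
    for col, cell in enumerate(row):   # sweep left to right
        if cell == "*":
            row[col] = "."             # pick up the item
            picked_up += 1

print(f"Picked up {picked_up} items")  # -> Picked up 3 items
```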

Students left with a broader understanding of robots and the work that engineers do. “It’s empowering for students to learn about programming robots because it can help them view themselves as builders of technology rather than mere consumers,” Park says. “I hope this day brought robots from their imagination to reality.”