System fixes bugs by importing functionality from other programs

At the Association for Computing Machinery’s Programming Language Design and Implementation conference this month, MIT researchers presented a new system that repairs dangerous software bugs by automatically importing functionality from other, more secure applications.

Remarkably, the system, dubbed CodePhage, doesn’t require access to the source code of the applications whose functionality it’s borrowing. Instead, it analyzes the applications’ execution and characterizes the types of security checks they perform. As a consequence, it can import checks from applications written in programming languages other than the one in which the program it’s repairing was written.

Once it’s imported code into a vulnerable application, CodePhage can provide a further layer of analysis that guarantees that the bug has been repaired.

“We have tons of source code available in open-source repositories, millions of projects, and a lot of these projects implement similar specifications,” says Stelios Sidiroglou-Douskos, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) who led the development of CodePhage. “Even though that might not be the core functionality of the program, they frequently have subcomponents that share functionality across a large number of projects.”

With CodePhage, he says, “over time, what you’d be doing is building this hybrid system that takes the best components from all these implementations.”

Sidiroglou-Douskos and his coauthors — MIT professor of computer science and engineering Martin Rinard, graduate student Fan Long, and Eric Lahtinen, a researcher in Rinard’s group — refer to the program CodePhage is repairing as the “recipient” and the program whose functionality it’s borrowing as the “donor.” To begin its analysis, CodePhage requires two sample inputs: one that causes the recipient to crash and one that doesn’t. A bug-locating program that the same group reported in March, dubbed DIODE, generates crash-inducing inputs automatically. But a user may simply have found that trying to open a particular file caused a crash.

Carrying the past

First, CodePhage feeds the “safe” input — the one that doesn’t induce crashes — to the donor. It then tracks the sequence of operations the donor executes and records them using a symbolic expression, a string of symbols that describes the logical constraints the operations impose.

At some point, for instance, the donor may check to see whether the size of the input is below some threshold. If it is, CodePhage will add a term to its growing symbolic expression that represents the condition of being below that threshold. It doesn’t record the actual size of the file — just the constraint imposed by the check.

Next, CodePhage feeds the donor the crash-inducing input. Again, it builds up a symbolic expression that represents the operations the donor performs. When the new symbolic expression diverges from the old one, however, CodePhage interrupts the process. The divergence represents a constraint that the safe input met but the crash-inducing input did not. As such, it could be a security check missing from the recipient.

CodePhage then analyzes the recipient to find locations at which the input meets most, but not quite all, of the constraints described by the new symbolic expression. The recipient may perform different operations in a different order than the donor does, and it may store data in different forms. But the symbolic expression describes the state of the data after it’s been processed, not the processing itself.
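
The article describes this comparison only at a high level, but the core idea, finding where the crash-inducing execution's constraints diverge from the safe execution's, can be sketched in a few lines of Python. The constraint strings and the divergence check below are illustrative stand-ins, not CodePhage's actual representation.

```python
# Illustrative sketch only -- not CodePhage's actual constraint representation.
# Each execution is modeled as an ordered list of symbolic constraint strings.

def first_divergence(safe_constraints, crash_constraints):
    """Return the first constraint satisfied along the safe execution but not
    enforced along the crash-inducing one, or None if the two agree."""
    for i, safe_term in enumerate(safe_constraints):
        crash_term = crash_constraints[i] if i < len(crash_constraints) else None
        if safe_term != crash_term:
            return safe_term  # candidate security check missing from the recipient
    return None

safe = ["header == 'PNG'", "input_size < 4096", "width * height < 10**6"]
crash = ["header == 'PNG'", "input_size < 4096"]  # the overflow check never fires

print(first_divergence(safe, crash))  # -> "width * height < 10**6"
```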

Researchers mount successful attacks against popular anonymity network

With 2.5 million daily users, the Tor network is the world’s most popular system for protecting Internet users’ anonymity. For more than a decade, people living under repressive regimes have used Tor to conceal their Web-browsing habits from electronic surveillance, and websites hosting content that’s been deemed subversive have used it to hide the locations of their servers.

Researchers at MIT and the Qatar Computing Research Institute (QCRI) have now demonstrated a vulnerability in Tor’s design. At the Usenix Security Symposium this summer, they will show that an adversary could infer a hidden server’s location, or the source of the information reaching a given Tor user, by analyzing the traffic patterns of encrypted data passing through a single computer in the all-volunteer Tor network.

Fortunately, the same paper also proposes defenses, which representatives of the Tor project say they are evaluating for possible inclusion in future versions of the Tor software.

“Anonymity is considered a big part of freedom of speech now,” says Albert Kwon, an MIT graduate student in electrical engineering and computer science and one of the paper’s first authors. “The Internet Engineering Task Force is trying to develop a human-rights standard for the Internet, and as part of their definition of freedom of expression, they include anonymity. If you’re fully anonymous, you can say what you want about an authoritarian government without facing persecution.”

Layer upon layer

Sitting atop the ordinary Internet, the Tor network consists of Internet-connected computers on which users have installed the Tor software. If a Tor user wants to, say, anonymously view the front page of The New York Times, his or her computer will wrap a Web request in several layers of encryption and send it to another Tor-enabled computer, which is selected at random. That computer — known as the guard — will peel off the first layer of encryption and forward the request to another randomly selected computer in the network. That computer peels off the next layer of encryption, and so on.

The last computer in the chain, called the exit, peels off the final layer of encryption, exposing the request’s true destination: the Times. The guard knows the Internet address of the sender, and the exit knows the Internet address of the destination site, but no computer in the chain knows both. This routing scheme, with its successive layers of encryption, is known as onion routing, and it gives the network its name: “Tor” is an acronym for “the onion router.”
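
As a rough illustration of the layering idea only (not Tor's actual protocol or cryptography), the sketch below wraps a request in three layers using the Fernet cipher from the third-party `cryptography` package, which is assumed to be installed; each relay in the chain strips exactly one layer.

```python
# Toy illustration of onion-style layering; Tor's real design differs substantially.
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]  # guard, middle, exit
relays = [Fernet(k) for k in relay_keys]

# The sender wraps the request innermost-first: exit layer, then middle, then guard.
message = b"GET https://www.nytimes.com/"
for relay in reversed(relays):
    message = relay.encrypt(message)

# Each relay, in order, peels off exactly one layer of encryption.
for relay in relays:
    message = relay.decrypt(message)

print(message)  # b"GET https://www.nytimes.com/"
```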

In addition to anonymous Internet browsing, however, Tor also offers what it calls hidden services. A hidden service protects the anonymity of not just the browser, but the destination site, too. Say, for instance, that someone in Iran wishes to host a site archiving news reports from Western media but doesn’t want it on the public Internet. Using the Tor software, the host’s computer identifies Tor routers that it will use as “introduction points” for anyone wishing to access its content. It broadcasts the addresses of those introduction points to the network, without revealing its own location.

If another Tor user wants to browse the hidden site, both his or her computer and the host’s computer build Tor-secured links to the introduction point, creating what the Tor project calls a “circuit.” Using the circuit, the browser and host identify yet another router in the Tor network, known as a rendezvous point, and build a second circuit through it. The location of the rendezvous point, unlike that of the introduction point, is kept private.

Traffic fingerprinting

Kwon devised an attack on this system with joint first author Mashael AlSabah, an assistant professor of computer science at Qatar University, a researcher at QCRI, and, this year, a visiting scientist at MIT; Srini Devadas, the Edwin Sibley Webster Professor in MIT’s Department of Electrical Engineering and Computer Science; David Lazar, another graduate student in electrical engineering and computer science; and QCRI’s Marc Dacier.

The researchers’ attack requires that the adversary’s computer serve as the guard on a Tor circuit. Since guards are selected at random, if an adversary connects enough computers to the Tor network, the odds are high that, at least on some occasions, one or another of them would be well-positioned to snoop.

During the establishment of a circuit, computers on the Tor network have to pass a lot of data back and forth. The researchers showed that simply by looking for patterns in the number of packets passing in each direction through a guard, machine-learning algorithms could, with 99 percent accuracy, determine whether the circuit was an ordinary Web-browsing circuit, an introduction-point circuit, or a rendezvous-point circuit. Breaking Tor’s encryption wasn’t necessary.
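
The paper's exact features and models aren't given here, but the general recipe, training a classifier on packet-count patterns observed at the guard, can be sketched with scikit-learn. The features, labels, and data below are synthetic stand-ins, so the script will not reproduce the reported 99 percent accuracy.

```python
# Synthetic sketch of circuit-type fingerprinting; features and labels are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3000
# Hypothetical features: packet counts seen inbound/outbound during circuit setup.
X = rng.poisson(lam=rng.uniform(5, 50, size=(n, 2)))
# Hypothetical labels: 0 = ordinary browsing, 1 = introduction point, 2 = rendezvous point.
y = rng.integers(0, 3, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
# On random synthetic data this hovers near chance; the study used real traffic traces.
print("accuracy on synthetic data:", clf.score(X_test, y_test))
```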

Memory management scheme could help enable chips with thousands of cores

In a modern, multicore chip, every core — or processor — has its own small memory cache, where it stores frequently used data. But the chip also has a larger, shared cache, which all the cores can access.

If one core tries to update data in the shared cache, other cores working on the same data need to know. So the shared cache keeps a directory of which cores have copies of which data.

That directory takes up a significant chunk of memory: In a 64-core chip, it might be 12 percent of the shared cache. And that percentage will only increase with the core count. Envisioned chips with 128, 256, or even 1,000 cores will need a more efficient way of maintaining cache coherence.

At the International Conference on Parallel Architectures and Compilation Techniques in October, MIT researchers will unveil the first fundamentally new approach to cache coherence in more than three decades. Whereas with existing techniques, the directory’s memory allotment increases in direct proportion to the number of cores, with the new approach, it increases according to the logarithm of the number of cores.

In a 128-core chip, that means that the new technique would require only one-third as much memory as its predecessor. With Intel set to release a 72-core high-performance chip in the near future, that’s a more than hypothetical advantage. But with a 256-core chip, the space savings rises to 80 percent, and with a 1,000-core chip, 96 percent.

When multiple cores are simply reading data stored at the same location, there’s no problem. Conflicts arise only when one of the cores needs to update the shared data. With a directory system, the chip looks up which cores are working on that data and sends them messages invalidating their locally stored copies of it.

“Directories guarantee that when a write happens, no stale copies of the data exist,” says Xiangyao Yu, an MIT graduate student in electrical engineering and computer science and first author on the new paper. “After this write happens, no read to the previous version should happen. So this write is ordered after all the previous reads in physical-time order.”

Time travel

What Yu and his thesis advisor — Srini Devadas, the Edwin Sibley Webster Professor in MIT’s Department of Electrical Engineering and Computer Science — realized was that the physical-time order of distributed computations doesn’t really matter, so long as their logical-time order is preserved. That is, core A can keep working away on a piece of data that core B has since overwritten, provided that the rest of the system treats core A’s work as having preceded core B’s.

The ingenuity of Yu and Devadas’ approach is in finding a simple and efficient means of enforcing a global logical-time ordering. “What we do is we just assign time stamps to each operation, and we make sure that all the operations follow that time stamp order,” Yu says.

With Yu and Devadas’ system, each core has its own counter, and each data item in memory has an associated counter, too. When a program launches, all the counters are set to zero. When a core reads a piece of data, it takes out a “lease” on it, meaning that it increments the data item’s counter to, say, 10. As long as the core’s internal counter doesn’t exceed 10, its copy of the data is valid. (The particular numbers don’t matter much; what matters is their relative value.)
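
A minimal sketch of that lease idea, loosely following the description above rather than the actual hardware design: reads extend a data item's lease, and a write simply orders itself after the lease in logical time instead of sending invalidation messages. Class and variable names are invented for illustration.

```python
# Minimal sketch of lease-based logical timestamps, following the article's
# description; the real hardware scheme has considerably more machinery.

class DataItem:
    def __init__(self, value):
        self.value = value
        self.lease = 0       # logical time up to which cached copies stay valid
        self.write_ts = 0    # logical time of the last write

class Core:
    def __init__(self, name):
        self.name = name
        self.clock = 0       # the core's own logical counter

    def read(self, item, lease_length=10):
        # Reading takes out a lease: the copy is promised valid until
        # logical time item.lease.
        self.clock = max(self.clock, item.write_ts)
        item.lease = max(item.lease, self.clock + lease_length)
        return item.value, item.lease

    def write(self, item, value):
        # A write orders itself after every outstanding lease in logical time,
        # so the writer's clock jumps past the lease; no invalidations needed.
        self.clock = max(self.clock, item.lease) + 1
        item.value = value
        item.write_ts = self.clock

x = DataItem(0)
a, b = Core("A"), Core("B")
val, valid_until = a.read(x)   # core A leases x until logical time 10
b.write(x, 42)                 # core B's write lands at logical time 11
print(val, valid_until, x.value, x.write_ts)
```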

Algorithm magnifies motions indiscernible to naked eye

For several years now, the research groups of MIT professors of computer science and engineering William Freeman and Frédo Durand have been investigating techniques for amplifying movements captured by video but indiscernible to the human eye. Versions of their algorithms can make the human pulse visible and even recover intelligible speech from the vibrations of objects filmed through soundproof glass.

Earlier this month, at the Computer Vision and Pattern Recognition conference, Freeman, Durand, and colleagues at the Qatar Computing Research Institute (QCRI) presented a new version of the algorithm that can amplify small motions even when they’re contained within objects executing large motions. So, for instance, it could make visible the precise sequence of muscle contractions in the arms of a baseball player swinging the bat, or in the legs of a soccer player taking a corner kick.

“The previous version of the algorithm assumed everything was small in the video,” Durand says. “Now we want to be able to magnify small motions that are hidden within large motions. The basic idea is to try to cancel the large motion and go back to the previous situation.”

Canceling the large motion means determining which pixels of successive frames of video belong to a moving object and which belong to the background. As Durand explains, that problem becomes particularly acute at the object’s boundaries.

If a digital camera captures an image of, say, a red object against a blue background, some of its photosensors will register red light, and some will register blue. But the sensors corresponding to the object’s boundaries may in fact receive light from both foreground and background, so they’ll register varying shades of purple.
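
That mixing at a boundary pixel is essentially a weighted average of the two colors. The snippet below is purely illustrative (it is not the researchers' algorithm), with a made-up coverage fraction `alpha`:

```python
import numpy as np

# A boundary photosensor receives light from both sides of the edge, so its
# recorded color is roughly a weighted mix of the two. `alpha` is the
# (unknown) fraction of the pixel covered by the foreground object.
red_object = np.array([200, 30, 30], dtype=float)      # foreground color (RGB)
blue_background = np.array([30, 30, 200], dtype=float)

for alpha in (0.25, 0.5, 0.75):
    mixed = alpha * red_object + (1 - alpha) * blue_background
    print(alpha, mixed.round())  # shades between blue and red, i.e. "purple"
```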

iRobot provides workshop for students

Thirty-seven middle school students from Boston, Cambridge, and Lawrence, Massachusetts, participated recently in a hands-on robotics workshop with 27 undergraduate student, graduate student, and young professional mentors at MIT. Engineers from iRobot joined the students and mentors to demonstrate several of their products, ranging from the popular Roomba vacuum cleaning robot to more advanced robots that facilitate remote collaboration and provide situational awareness in military settings.

The workshop – part of the STEM Mentoring Program hosted by the MIT Office of Engineering Outreach Programs – gave students a glimpse into the complexity of programming robots. “Robots don’t start out with minds of their own,” says STEM Program Coordinator Catherine Park. “There is a lot of work that goes into enabling robots to do the things they do.”

Along with learning about iRobot products, students and their mentors took part in an activity that demonstrated some basic principles of programming. The group worked in teams to write pseudocode and then followed it to traverse a grid and pick up items, much like the Roomba does.
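
The workshop's actual pseudocode isn't reproduced in this article, but the exercise might have looked something like the following hypothetical sketch, which sweeps a small grid back and forth and collects any items it encounters, much as a Roomba covers a room:

```python
# Hypothetical version of the workshop exercise: sweep a grid and collect items.
grid = [
    [".", "item", "."],
    [".", ".", "item"],
    ["item", ".", "."],
]

collected = []
for row_index, row in enumerate(grid):
    # Alternate direction each row, like a vacuum robot's back-and-forth sweep.
    cells = enumerate(row) if row_index % 2 == 0 else reversed(list(enumerate(row)))
    for col_index, cell in cells:
        if cell == "item":
            collected.append((row_index, col_index))
            grid[row_index][col_index] = "."

print("picked up items at:", collected)
```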

Students left with a broader understanding of robots and the work that engineers do. “It’s empowering for students to learn about programming robots because it can help them view themselves as builders of technology rather than mere consumers,” Park says. “I hope this day brought robots from their imagination to reality.”

Identifying students at risk of dropping out

Massive open online courses (MOOCs) grant huge numbers of people access to world-class educational resources, but they also suffer high rates of attrition.

To some degree, that’s inevitable: Many people who enroll in MOOCs may have no interest in doing homework, but simply plan to listen to video lectures in their spare time.

Others, however, may begin courses with the firm intention of completing them but get derailed by life’s other demands. Identifying those people before they drop out and providing them with extra help could make their MOOC participation much more productive.

The problem is that you don’t know who’s actually dropped out — or, in MOOC parlance, “stopped out” — until the MOOC has been completed. One missed deadline does not a stopout make; but after the second or third missed deadline, it may be too late for an intervention to do any good.

Last week, at the International Conference on Artificial Intelligence in Education, MIT researchers showed that a dropout-prediction model trained on data from one offering of a course can help predict which students will stop out of the next offering. The prediction remains fairly accurate even if the organization of the course changes, so that the data collected during one offering doesn’t exactly match the data collected during the next.

“There’s a known area in machine learning called transfer learning, where you train a machine-learning model in one environment and see what you have to do to adapt it to a new environment,” says Kalyan Veeramachaneni, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory who conducted the study together with Sebastien Boyer, a graduate student in MIT’s Technology and Policy Program. “Because if you’re not able to do that, then the model isn’t worth anything, other than the insight it may give you. It cannot be used for real-time prediction.”

Generic descriptors

Veeramachaneni and Boyer’s first step was to develop a set of variables that would allow them to compare data collected during different offerings of the same course — or, indeed, offerings of different courses. These include things such as average time spent per correct homework problem and amount of time spent with video lectures or other resources.

Next, for each of three different offerings of the same course, they normalized the raw values of those variables against the class averages. So, for instance, a student who spent two hours a week watching videos where the class average was three would have a video-watching score of 0.67, while a student who spent four hours a week watching videos would have a score of 1.33.
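
In code, that normalization step is just a division by the class average for each variable. The feature names below are illustrative, but the numbers reproduce the example above:

```python
# Sketch of the normalization step: divide each student's raw value by the
# class average for that offering. Feature names are illustrative.
class_averages = {"video_hours_per_week": 3.0, "minutes_per_correct_problem": 12.0}

students = [
    {"video_hours_per_week": 2.0, "minutes_per_correct_problem": 15.0},
    {"video_hours_per_week": 4.0, "minutes_per_correct_problem": 9.0},
]

normalized = [
    {feature: value / class_averages[feature] for feature, value in s.items()}
    for s in students
]
print(normalized)  # video scores of about 0.67 and 1.33, matching the example above
```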

They ran the normalized data for the first course offering through a machine-learning algorithm that tried to find correlations between particular values of the variables and stopout. Then they used those correlations to try to predict stopout in the next two offerings of the course. They repeated the process with the second course offering, using the resulting model to predict stopout in the third.

Tipping the balance

Already, the model’s predictions were fairly accurate. But Veeramachaneni and Boyer hoped to do better. They tried several different techniques to improve the model’s accuracy, but the one that fared best is called importance sampling. For each student enrolled in, say, the second offering of the course, they found the student in the first offering who provided the closest match, as determined by a “distance function” that factored in all the variables. Then, according to the closeness of the match, they gave the statistics on the student from the first offering a greater weight during the machine-learning process.
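
One way to realize that weighting, sketched here on synthetic data with scikit-learn rather than the authors' actual pipeline, is to compute each first-offering student's distance to their closest match in the second offering and pass the resulting weights to the learner via `sample_weight`:

```python
# Rough sketch of the importance-sampling idea: weight each first-offering
# student by how closely they resemble some student in the second offering,
# then pass those weights to the learner. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import pairwise_distances

rng = np.random.default_rng(0)
X_first, y_first = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)  # offering 1
X_second = rng.normal(loc=0.3, size=(150, 5))                          # offering 2

# Distance from each offering-1 student to their closest offering-2 match.
closest = pairwise_distances(X_first, X_second).min(axis=1)
weights = 1.0 / (1.0 + closest)   # closer match -> larger weight (one simple choice)

model = LogisticRegression().fit(X_first, y_first, sample_weight=weights)
predicted_stopout = model.predict(X_second)
print(predicted_stopout[:10])
```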

Cheaper, lower-power memory without sacrificing speed

Random-access memory, or RAM, is where computers like to store the data they’re working on. A processor can retrieve data from RAM tens of thousands of times more rapidly than it can from the computer’s disk drive.

But in the age of big data, data sets are often much too large to fit in a single computer’s RAM. Sequencing data describing a single large genome could take up the RAM of somewhere between 40 and 100 typical computers.

Flash memory — the type of memory used by most portable devices — could provide an alternative to conventional RAM for big-data applications. It’s about a tenth as expensive, and it consumes about a tenth as much power.

The problem is that it’s also a tenth as fast. But at the International Symposium on Computer Architecture in June, MIT researchers presented a new system that, for several common big-data applications, should make servers using flash memory as efficient as those using conventional RAM, while preserving their power and cost savings.

The researchers also presented experimental evidence showing that, if the servers executing a distributed computation have to go to disk for data even 5 percent of the time, their performance falls to a level that’s comparable with flash, anyway.

In other words, even without the researchers’ new techniques for accelerating data retrieval from flash memory, 40 servers with 10 terabytes’ worth of RAM couldn’t handle a 10.5-terabyte computation any better than 20 servers with 20 terabytes’ worth of flash memory, which would consume only a fraction as much power.

“This is not a replacement for DRAM [dynamic RAM] or anything like that,” says Arvind, the Johnson Professor of Computer Science and Engineering at MIT, whose group performed the new work. “But there may be many applications that can take advantage of this new style of architecture. Which companies recognize: Everybody’s experimenting with different aspects of flash. We’re just trying to establish another point in the design space.”

Joining Arvind on the new paper are Sang Woo Jun and Ming Liu, MIT graduate students in computer science and engineering and joint first authors; their fellow grad student Shuotao Xu; Sungjin Lee, a postdoc in Arvind’s group; Myron King and Jamey Hicks, who did their PhDs with Arvind and were researchers at Quanta Computer when the new system was developed; and one of their colleagues from Quanta, John Ankcorn — who is also an MIT alumnus.

Outsourced computation

The researchers were able to make a network of flash-based servers competitive with a network of RAM-based servers by moving a little computational power off of the servers and onto the chips that control the flash drives. By preprocessing some of the data on the flash drives before passing it back to the servers, those chips can make distributed computation much more efficient. And since the preprocessing algorithms are wired into the chips, they dispense with the computational overhead associated with running an operating system, maintaining a file system, and the like.
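
The general pattern being exploited, filtering or reducing data where it is stored so that far less of it crosses the network, can be illustrated with a toy sketch. This is not the researchers' FPGA design; the function names and data below are hypothetical.

```python
# Toy illustration of near-data preprocessing: filter records where they are
# stored instead of shipping everything to the server.

def storage_side_filter(records, predicate):
    """Stands in for work done near the flash: return only the records the host needs."""
    return [r for r in records if predicate(r)]

def host_side_aggregate(filtered_records):
    """Runs on the server, over a much smaller volume of data."""
    return sum(r["value"] for r in filtered_records)

records = [{"key": i, "value": i * i} for i in range(100_000)]
small_subset = storage_side_filter(records, lambda r: r["key"] % 1000 == 0)
print(host_side_aggregate(small_subset))
```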

With hardware contributed by some of their sponsors — Quanta, Samsung, and Xilinx — the researchers built a prototype network of 20 servers. Each server was connected to a field-programmable gate array, or FPGA, a kind of chip that can be reprogrammed to mimic different types of electrical circuits. Each FPGA, in turn, was connected to two half-terabyte — or 500-gigabyte — flash chips and to the two FPGAs nearest it in the server rack.

Robots from MIT’s Computer Science and Artificial Intelligence Lab work together more effectively in the face of uncertainty

If companies like Amazon and Google have their way, soon enough we will have robots air-dropping supplies from the sky. But is our software where it needs to be to move and deliver goods in the real world?

This question has been explored for many years by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), who have worked on scenarios inspired by domains ranging from factory floors to drone delivery.

At the recent Robotics Science and Systems (RSS) conference, a CSAIL team presented a new system of three robots that can work together to deliver items quickly, accurately and, perhaps most importantly, in unpredictable environments. The team says its models could extend to a variety of other applications, including hospitals, disaster situations, and even restaurants and bars.

To demonstrate their approach, the CSAIL researchers converted their lab into a miniature “bar” that included a PR2 robot “bartender” and two four-wheeled Turtlebot robots that would go into the different offices and ask the human participants for drink orders. The Turtlebots then reasoned about which orders were required in the different rooms and when other robots may have delivered drinks, in order to search most efficiently for new orders and deliver the items to the spaces.

The team’s techniques reflect state-of-the-art planning algorithms that allow groups of robots to perform tasks given little more than a high-level description of the general problem to be solved.

The RSS paper, which was named a Best Paper Finalist, was co-authored by Duke University professor and former CSAIL postdoc George Konidaris, MIT graduate students Ariel Anders and Gabriel Cruz, MIT professors Jonathan How and Leslie Kaelbling, and lead author Chris Amato, a former CSAIL postdoc who is now a professor at the University of New Hampshire.

Humanity’s one certainty: uncertainty

One of the big challenges in getting robots to work together is the fact that the human world is full of so much uncertainty.

More specifically, robots deal with three kinds of uncertainty, related to sensors, outcomes, and communications.

“Each robot’s sensors get less-than-perfect information about the location and status of both themselves and the things around them,” Amato says. “As for outcomes, a robot may drop items when trying to pick them up or take longer than expected to navigate. And, on top of that, robots often are not able to communicate with one another, either because of communication noise or because they are out of range.”

These uncertainties were reflected in the team’s delivery task: among other things, the supply robot could serve only one waiter robot at a time, and the robots were unable to communicate with one another unless they were in close proximity. Communication difficulties such as this are a particular risk in disaster-relief or battlefield scenarios.
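
A toy simulation of that communication limit (hypothetical names and numbers, not the team's planner) shows why it matters: order information propagates only when two robots happen to be within range of each other.

```python
# Toy simulation of one of the task's uncertainties: robots can exchange
# order information only when they happen to be near each other.
COMM_RANGE = 2.0

class Robot:
    def __init__(self, name, position):
        self.name = name
        self.position = position
        self.known_orders = set()

    def try_share(self, other):
        # Communication succeeds only within range; otherwise each robot keeps
        # acting on its own, possibly stale, view of the outstanding orders.
        if abs(self.position - other.position) <= COMM_RANGE:
            merged = self.known_orders | other.known_orders
            self.known_orders = set(merged)
            other.known_orders = set(merged)

waiter1 = Robot("turtlebot-1", position=0.0)
waiter2 = Robot("turtlebot-2", position=5.0)
waiter1.known_orders.add("office 32: coffee")

waiter1.try_share(waiter2)   # too far apart: nothing is shared
waiter2.position = 1.5
waiter1.try_share(waiter2)   # now in range: the order propagates
print(waiter2.known_orders)
```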

“These limitations mean that the robots don’t know what the other robots are doing or what the other orders are,” Anders says. “It forced us to work on more complex planning algorithms that allow the robots to engage in higher-level reasoning about their location, status, and behavior.”

New design tool lets novices do in minutes what would otherwise take hours

The technology behind 3-D printing is growing more and more common, but the ability to create designs for it is not. Any but the simplest designs require expertise with computer-aided design (CAD) applications, and even for the experts, the design process is immensely time consuming.

Researchers at MIT and the Interdisciplinary Center Herzliya in Israel aim to change that, with a new system that automatically turns CAD files into visual models that users can modify in real time, simply by moving virtual sliders on a Web page. Once the design meets the user’s specifications, he or she hits the print button to send it to a 3-D printer.

“We envision a world where everything you buy can potentially be customized, and technologies such as 3-D printing promise that that might be cost-effective,” says Masha Shugrina, an MIT graduate student in computer science and engineering and one of the new system’s designers. “So the question we set out to answer was, ‘How do you actually allow people to modify digital designs in a way that keeps them functional?’”

For a CAD user, modifying a design means changing numerical values in input fields and then waiting for as much as a minute while the program recalculates the geometry of the associated object.

Once the design is finalized, it has to be tested using simulation software. For designs intended for 3-D printers, compliance with the printers’ specifications is one such test. But designers typically test their designs for structural stability and integrity as well. Those tests can take anywhere from several minutes to several hours, and they need to be rerun every time the design changes.

Advance work

Shugrina and her collaborators — her thesis advisor, Wojciech Matusik, an associate professor of electrical engineering and computer science at MIT, and Ariel Shamir of IDC Herzliya — are trying to turn visual design into something novices can do in real time. They presented their new system, dubbed “Fab Forms,” at the Association for Computing Machinery’s Siggraph conference, in August.

Fab Forms begins with a design created by a seasoned CAD user. It then sweeps through a wide range of values for the design’s parameters — the numbers that a CAD user would typically change by hand — calculating the resulting geometries and storing them in a database.

For each of those geometries, the system also runs a battery of tests, specified by the designer, and it again stores the results. The whole process would take hundreds of hours on a single computer, but in their experiments, the researchers distributed the tasks among servers in the cloud.
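
The precomputation strategy itself is easy to sketch: enumerate parameter combinations, run the slow geometry and test steps once per combination, and store the results so that a slider move becomes a lookup. The parameter names, geometry function, and tests below are invented for illustration.

```python
# Sketch of the precomputation idea behind the system: sweep parameter
# combinations ahead of time, run the slow checks once, and store the results
# so the slider UI only ever does a lookup. Names and checks are illustrative.
import itertools

def compute_geometry(height_mm, wall_mm):
    # Stand-in for the CAD program's slow geometry recalculation.
    return {"height_mm": height_mm, "wall_mm": wall_mm, "volume": height_mm * wall_mm * 40}

def passes_tests(geometry):
    # Stand-in for printer-compliance and structural-stability checks.
    return geometry["wall_mm"] >= 2 and geometry["volume"] < 50_000

parameter_ranges = {"height_mm": range(20, 101, 10), "wall_mm": range(1, 6)}

database = {}
for values in itertools.product(*parameter_ranges.values()):
    params = dict(zip(parameter_ranges.keys(), values))
    geometry = compute_geometry(**params)
    database[tuple(values)] = (geometry, passes_tests(geometry))

# A slider move becomes a constant-time lookup instead of a recomputation.
print(database[(50, 3)])
```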

Measuring minuscule vibrations in structures to detect damage

For Justin Chen, a PhD student in the MIT Department of Civil and Environmental Engineering (CEE), there is more to observe in the built environment than meets the eye. So much more, in fact, that he has centered his entire academic focus in CEE on structural health monitoring.

“Every day, people drive on bridges, enter buildings, obtain water through infrastructure, and so on,” Chen says. “The central question my collaborators and I are trying to answer is: How do we keep infrastructure operational, even when it’s battered by the elements?”

Although most would describe buildings as completely static, Chen says his work reveals structural movement the naked eye alone cannot perceive. Using a computer vision technique called motion magnification, Chen and his colleagues successfully catch imperceptibly tiny vibrations in structures.

This technique, Chen explains, will allow engineers to monitor the health state of this infrastructure, maintain it, and ultimately improve the sustainability of future infrastructure worldwide.
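
As a simple illustration of the kind of signal involved (not the group's motion-magnification code), one can treat the brightness of a single pixel over time as a vibration measurement and pull out its dominant frequency with an FFT; a shift in that frequency over time could flag a structure for closer inspection. The frame rate and frequency below are assumed values.

```python
# Illustrative sketch only: find the dominant vibration frequency in a
# single pixel's brightness over time.
import numpy as np

frame_rate = 60.0                     # frames per second (assumed camera setting)
t = np.arange(0, 10, 1 / frame_rate)  # ten seconds of video
structural_freq = 2.4                 # Hz, hypothetical natural frequency

rng = np.random.default_rng(0)
pixel_intensity = 0.01 * np.sin(2 * np.pi * structural_freq * t)
pixel_intensity += rng.normal(0, 0.002, t.size)   # sensor noise

spectrum = np.abs(np.fft.rfft(pixel_intensity))
freqs = np.fft.rfftfreq(t.size, d=1 / frame_rate)
print("dominant frequency (Hz):", freqs[spectrum[1:].argmax() + 1])  # skip the DC bin
```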

Now in his fifth year, Chen works in collaboration with MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), devising algorithms that observe small structural motions in video. The work is part of a research project sponsored by Shell through the MIT Energy Initiative (MITEI), with Professor Oral Buyukozturk of CEE and Professor Bill Freeman of CSAIL and the Department of Electrical Engineering and Computer Science as principal investigators. Chen’s research will contribute to a more comprehensive understanding of construction methods and materials for sustainable infrastructure by providing data to building managers, who can then arrange for repairs or more in-depth inspections.

Catching damage while it is still minor, before it becomes more severe and costly, could reduce total repair costs and extend the service life of the structure, benefitting critical civil and industrial infrastructure. Additionally, any information Chen’s team collects on a building’s behavior has the potential to inform design changes that improve the reliability and lifespan of future buildings.

Chen received his undergraduate degree in physics from Caltech in 2009. While working at MIT’s Lincoln Laboratory from 2009 to 2010, Chen discovered a fascination with laser vibrometry and later jumped at the opportunity to work with Buyukozturk as a PhD student on his National Science Foundation (NSF)-supported project for measuring defects in concrete. He recently spoke with CEE about his work:

Q: What are the real-world implications of your research?

A: For the past five years, I’ve worked on the challenges of structural health monitoring and non-destructive testing for the condition assessment of infrastructure in the context of two different projects: an NSF- and American Society for Nondestructive Testing-supported project on using laser vibrometry to measure defects in fiberglass-reinforced concrete; and, most recently, a Shell- and MITEI-sponsored project called BeeView, which employs distributed sensing and motion magnification for detecting damage in structures.

In BeeView, we are attempting to uncover damage in buildings through sensors that measure the structural vibrations of buildings, which we then use to deduce the level of structural damage. My particular focus is using cameras to measure the vibrations of buildings, bridges, and other structures.

When you look at buildings, they’re fixed, stationary. With CSAIL, we’ve been able to use algorithms to observe structural motions from videos. From those small motions, we can extract the displacements of these buildings and process them with other damage-detection algorithms.

Our work serves as an early warning for people who maintain these buildings. Theoretically, when we suspect damage, we can pass that information on to those who will repair the buildings and use the lessons learned to construct more sustainable infrastructure in the future.

Over the course of our exploration, we’ve built a model structure in our basement laboratory that we’ve measured with accelerometers and other sensors as a test-bed for our damage detection algorithms.

At the end of this project, we plan to have developed a piece of software, compatible with the camera, that analyzes a structure and determines how it’s changed over time.

Q: What opportunities have you had to delve deeper into your research?

A: In January, I traveled to Houston, Texas, along with other Shell-MITEI fellows, at the invitation of our sponsor, Shell. All of the fellows were invited to visit the testing facilities and explore the current research. It was one of the most memorable experiences during my time in Course 1 [CEE]. I learned about the oil industry — a sector with which I had no previous experience — and how my research with vibration analysis can be used to solve their future challenges.

In the oil business, there is a lot of infrastructure — pipelines, oil rigs, and refineries — which all need to be operational and protected. With the camera, I could measure those facilities and help to maintain and sustain their function.