Monthly Archives: September 2016

System lets novices customize designs for 3-D printing in minutes

The technology behind 3-D printing is growing more and more common, but the ability to create designs for it is not. All but the simplest designs require expertise with computer-aided design (CAD) applications, and even for experts, the design process is immensely time-consuming.

Researchers at MIT and the Interdisciplinary Center Herzliya in Israel aim to change that, with a new system that automatically turns CAD files into visual models that users can modify in real time, simply by moving virtual sliders on a Web page. Once the design meets the user’s specifications, he or she hits the print button to send it to a 3-D printer.

“We envision a world where everything you buy can potentially be customized, and technologies such as 3-D printing promise that that might be cost-effective,” says Masha Shugrina, an MIT graduate student in computer science and engineering and one of the new system’s designers. “So the question we set out to answer was, ‘How do you actually allow people to modify digital designs in a way that keeps them functional?’”

For a CAD user, modifying a design means changing numerical values in input fields and then waiting for as much as a minute while the program recalculates the geometry of the associated object.

Once the design is finalized, it has to be tested using simulation software. For designs intended for 3-D printers, compliance with the printers’ specifications is one such test. But designers typically test their designs for structural stability and integrity as well. Those tests can take anywhere from several minutes to several hours, and they need to be rerun every time the design changes.

Advance work

Shugrina and her collaborators — her thesis advisor, Wojciech Matusik, an associate professor of electrical engineering and computer science at MIT, and Ariel Shamir of IDC Herzliya — are trying to turn visual design into something novices can do in real time. They presented their new system, dubbed “Fab Forms,” at the Association for Computing Machinery’s Siggraph conference, in August.

Fab Forms begins with a design created by a seasoned CAD user. It then sweeps through a wide range of values for the design’s parameters — the numbers that a CAD user would typically change by hand — calculating the resulting geometries and storing them in a database.

For each of those geometries, the system also runs a battery of tests, specified by the designer, and it again stores the results. The whole process would take hundreds of hours on a single computer, but in their experiments, the researchers distributed the tasks among servers in the cloud.
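The sweep-and-test stage can be sketched in a few lines. Everything below is a hypothetical stand-in: `compute_geometry` and `passes_tests` abbreviate the full CAD recomputation and the designer-specified simulation tests, which in the real system run distributed across cloud servers.

```python
import itertools

def compute_geometry(params):
    # Stand-in for rebuilding the CAD model for one parameter assignment.
    return {"params": params, "volume": params["width"] * params["height"]}

def passes_tests(geometry):
    # Stand-in for printability and structural checks set by the designer.
    return geometry["volume"] <= 100.0

def sweep(param_ranges):
    """Precompute every valid geometry so the web sliders respond instantly."""
    database = []
    names = sorted(param_ranges)
    for values in itertools.product(*(param_ranges[n] for n in names)):
        params = dict(zip(names, values))
        geometry = compute_geometry(params)
        if passes_tests(geometry):
            database.append(geometry)
    return database

db = sweep({"width": [5.0, 10.0, 15.0], "height": [2.0, 8.0]})
print(len(db), "valid designs precomputed")
```

Because every slider position maps to a precomputed, pretested entry in the database, the user never waits for geometry recalculation or simulation at interaction time.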

Detecting minuscule vibrations in structures

For Justin Chen, a PhD student in the MIT Department of Civil and Environmental Engineering (CEE), there is more to observe in the built environment than meets the eye. So much more, in fact, that he has focused his academic work in CEE on structural health monitoring.

“Every day, people drive on bridges, enter buildings, obtain water through infrastructure, and so on,” Chen says. “The central question my collaborators and I are trying to answer is: How do we keep infrastructure operational, even when it’s battered by the elements?”

Although most would describe buildings as completely static, Chen says his work reveals structural movement that the naked eye cannot perceive. Using a computer vision technique called motion magnification, Chen and his colleagues can capture imperceptibly small vibrations in structures.

This technique, Chen explains, will allow engineers to monitor the health state of this infrastructure, maintain it, and ultimately improve the sustainability of future infrastructure worldwide.
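The core idea behind Eulerian motion magnification can be illustrated with a toy, intensity-based sketch: temporally band-pass each pixel's brightness, amplify the filtered signal, and add it back, so sub-pixel motions become visible. This is a drastic simplification; the published methods use spatial pyramids and carefully designed temporal filters, and the filter kernel and amplification factor below are illustrative assumptions.

```python
import numpy as np

def magnify_motion(frames, alpha=20.0, kernel=(-0.5, 1.0, -0.5)):
    """Toy Eulerian magnification on a (T, H, W) float video array:
    crudely band-pass each pixel over time, then amplify and add back."""
    T = frames.shape[0]
    filtered = np.zeros_like(frames)
    for t in range(1, T - 1):
        filtered[t] = (kernel[0] * frames[t - 1]
                       + kernel[1] * frames[t]
                       + kernel[2] * frames[t + 1])
    return frames + alpha * filtered

# A one-pixel "video" oscillating very slightly around a gray value:
video = 0.5 + 0.01 * np.sin(np.linspace(0.0, 6.28, 32)).reshape(-1, 1, 1)
magnified = magnify_motion(video)
print(magnified.shape)
```

The amplified output exaggerates the temporal variation at each pixel, which is what makes a building's tiny sway readable in the processed video.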

Now in his fifth year, Chen works in collaboration with MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), devising algorithms that observe small structural motions in video. The research project is sponsored by Shell through the MIT Energy Initiative (MITEI), with Professor Oral Buyukozturk of CEE and Professor Bill Freeman of CSAIL and the Department of Electrical Engineering and Computer Science as principal investigators. Chen’s research will further contribute to a more comprehensive understanding of construction methods and materials for sustainable infrastructure by providing data to building managers, who can arrange for repairs or more in-depth inspections.

By detecting damage while it is still minor, before it becomes severe and costly, total repair costs could be reduced and the service life of the structure extended, benefiting critical civil and industrial infrastructure. Additionally, any information Chen’s team collects on a building’s behavior has the potential to contribute to design changes for better reliability and lifespan of future buildings.

Chen received his undergraduate degree in physics from Caltech in 2009. While working at MIT’s Lincoln Laboratory from 2009 to 2010, Chen developed a fascination with laser vibrometry and later jumped at the opportunity to work with Buyukozturk as a PhD student on his National Science Foundation (NSF)-supported project for measuring defects in concrete. He recently spoke with CEE about his work:

Q: What are the real world implications of your research?

A: For the past five years, I’ve worked on the challenges of structural health monitoring and non-destructive testing for the condition assessment of infrastructure in the context of two different projects: an NSF- and American Society for Nondestructive Testing-supported project on using laser vibrometry to measure defects in fiberglass-reinforced concrete, and most recently a Shell- and MITEI-sponsored project called BeeView, which employs distributed sensing and motion magnification for detecting damage in structures.

In BeeView, we are attempting to uncover damage in buildings through sensors that measure the structural vibrations of buildings, which we then use to deduce the level of structural damage. My particular focus is using cameras to measure the vibrations of buildings, bridges, and other structures.

When you look at buildings, they’re fixed, stationary. With CSAIL, we’ve been able to use algorithms to observe structural motions from videos. From those small motions, we can extract the displacements of these buildings and process them with other damage-detection algorithms.
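The displacement-to-damage step Chen describes can be illustrated with a toy frequency-based indicator: a structure's dominant vibration frequency drops when stiffness is lost, so comparing measured frequencies against a healthy baseline flags possible damage. Real pipelines use much richer modal analysis; the signals and thresholds below are illustrative assumptions.

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Return the frequency (Hz) of the strongest spectral peak."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

rate = 100.0
t = np.arange(0.0, 10.0, 1.0 / rate)
healthy = np.sin(2 * np.pi * 5.0 * t)   # baseline structure rings at 5 Hz
damaged = np.sin(2 * np.pi * 4.5 * t)   # a softened structure rings lower
print(dominant_frequency(healthy, rate), dominant_frequency(damaged, rate))
```

In practice the input displacements would come from the video-derived measurements rather than a synthetic sinusoid, and a sustained downward frequency shift would trigger a closer inspection.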

Our work serves as an early warning for people who maintain these buildings. Theoretically, when we suspect damage, we can pass that information on to those who will repair the buildings and use the lessons learned to construct more sustainable infrastructure in the future.

Over the course of our exploration, we’ve built a model structure in our basement laboratory that we’ve measured with accelerometers and other sensors as a test-bed for our damage detection algorithms.

At the end of this project, we plan to have developed a piece of software, compatible with the camera, that analyzes a structure and determines how it’s changed over time.

Q: What opportunities have you had to delve deeper into your research?

A: In January, I traveled to Houston, Texas, along with other Shell-MITEI fellows, at the invitation of our sponsor, Shell. All of the fellows were invited to visit the testing facilities and explore the current research. It was one of the most memorable experiences during my time in Course 1 [CEE]. I learned about the oil industry — a sector with which I had no previous experience — and how my research with vibration analysis can be used to solve their future challenges.

In the oil business, there is a lot of infrastructure — pipelines, oil rigs, and refineries — which all need to be operational and protected. With the camera, I could measure those facilities and help to maintain and sustain their function.

Memory management scheme could help enable chips with thousands of cores

In a modern, multicore chip, every core — or processor — has its own small memory cache, where it stores frequently used data. But the chip also has a larger, shared cache, which all the cores can access.

If one core tries to update data in the shared cache, other cores working on the same data need to know. So the shared cache keeps a directory of which cores have copies of which data.

That directory takes up a significant chunk of memory: In a 64-core chip, it might be 12 percent of the shared cache. And that percentage will only increase with the core count. Envisioned chips with 128, 256, or even 1,000 cores will need a more efficient way of maintaining cache coherence.

At the International Conference on Parallel Architectures and Compilation Techniques in October, MIT researchers will unveil the first fundamentally new approach to cache coherence in more than three decades. Whereas with existing techniques the directory’s memory allotment increases in direct proportion to the number of cores, with the new approach it increases according to the logarithm of the number of cores.

In a 128-core chip, that means that the new technique would require only one-third as much memory as its predecessor. With Intel set to release a 72-core high-performance chip in the near future, that’s a more than hypothetical advantage. But with a 256-core chip, the space savings rises to 80 percent, and with a 1,000-core chip, 96 percent.

When multiple cores are simply reading data stored at the same location, there’s no problem. Conflicts arise only when one of the cores needs to update the shared data. With a directory system, the chip looks up which cores are working on that data and sends them messages invalidating their locally stored copies of it.
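A minimal sketch of that directory behavior, with a sharer set per cache line standing in for the hardware's per-core presence bits (which is exactly why a full-map directory's size grows linearly with the core count). The class and method names are hypothetical:

```python
class Directory:
    """Toy full-map directory: track which cores hold a copy of each
    line and report who must be invalidated when a core writes."""

    def __init__(self):
        self.sharers = {}  # line address -> set of core ids holding a copy

    def read(self, core, addr):
        # A read adds the core to the line's sharer set.
        self.sharers.setdefault(addr, set()).add(core)

    def write(self, core, addr):
        # A write invalidates every other sharer's local copy;
        # in hardware, an invalidation message goes to each of them.
        invalidated = self.sharers.get(addr, set()) - {core}
        self.sharers[addr] = {core}
        return invalidated

d = Directory()
d.read(0, 0x100); d.read(1, 0x100); d.read(2, 0x100)
print(sorted(d.write(1, 0x100)))  # cores 0 and 2 lose their copies
```

Every write thus costs a directory lookup plus invalidation traffic, and every line's entry must be wide enough to name all possible sharers — the overhead the new approach attacks.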

“Directories guarantee that when a write happens, no stale copies of the data exist,” says Xiangyao Yu, an MIT graduate student in electrical engineering and computer science and first author on the new paper. “After this write happens, no read to the previous version should happen. So this write is ordered after all the previous reads in physical-time order.”

Time travel

What Yu and his thesis advisor — Srini Devadas, the Edwin Sibley Webster Professor in MIT’s Department of Electrical Engineering and Computer Science — realized was that the physical-time order of distributed computations doesn’t really matter, so long as their logical-time order is preserved. That is, core A can keep working away on a piece of data that core B has since overwritten, provided that the rest of the system treats core A’s work as having preceded core B’s.

The ingenuity of Yu and Devadas’ approach is in finding a simple and efficient means of enforcing a global logical-time ordering. “What we do is we just assign time stamps to each operation, and we make sure that all the operations follow that time stamp order,” Yu says.

With Yu and Devadas’ system, each core has its own counter, and each data item in memory has an associated counter, too. When a program launches, all the counters are set to zero. When a core reads a piece of data, it takes out a “lease” on it, meaning that it increments the data item’s counter to, say, 10. As long as the core’s internal counter doesn’t exceed 10, its copy of the data is valid. (The particular numbers don’t matter much; what matters is their relative value.)
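The lease-and-counter scheme just described can be loosely sketched as follows. The field names and the fixed lease length are simplifying assumptions; the actual protocol handles many more cases (lease renewal policy, exclusive ownership, timestamp compression).

```python
class Core:
    def __init__(self):
        self.clock = 0  # the core's logical-time counter

class Line:
    def __init__(self, value):
        self.value = value
        self.wts = 0  # logical time of the last write
        self.rts = 0  # lease: reads of the current copy are valid through this time

LEASE = 10

def read(core, line):
    # Jump the core forward to at least the write time, then extend
    # the lease so its copy stays valid in logical time.
    core.clock = max(core.clock, line.wts)
    line.rts = max(line.rts, core.clock + LEASE)
    return line.value

def write(core, line, value):
    # Order the write after every leased read by jumping past the
    # lease in logical time; no physical invalidations are needed.
    core.clock = max(core.clock, line.rts) + 1
    line.value = value
    line.wts = line.rts = core.clock

a, b = Core(), Core()
x = Line(0)
read(a, x)       # core A leases x through logical time 10
write(b, x, 42)  # core B's write lands at logical time 11
print(a.clock, b.clock)
```

Core A can keep using its old copy because, in the global logical order, all of its reads happened before B's write — the "time travel" the researchers describe.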