In 1965, Intel co-founder Gordon Moore predicted that the number of transistors that could fit on a computer chip would grow exponentially, and they did, doubling about every two years. For half a century, Moore's Law has endured: Computers have gotten smaller, faster, cheaper, and more efficient, enabling the rapid worldwide adoption of PCs, smartphones, high-speed internet, and more.

This miniaturization trend has led to silicon chips today that have almost unimaginably small circuitry. Transistors, the tiny switches that implement computer microprocessors, are so small that 1,000 of them laid end-to-end are no wider than a human hair. And for a long time, the smaller the transistors were, the faster they could switch. But today, we’re approaching the limit of how small transistors can get. As a result, over the past decade researchers have been scratching their heads to find other ways to improve performance so that the computer industry can continue to innovate.

While we wait for the maturation of new computing technologies like quantum, carbon nanotubes, or photonics (which may take a while), other approaches will be needed to deliver performance as Moore’s Law comes to an end. In a recent journal article published in Science, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) identifies three key areas to prioritize to continue delivering computing speed-ups: better software, new algorithms, and more streamlined hardware.

Charles E. Leiserson, the Edwin Sibley Webster Professor in MIT's Department of Electrical Engineering and Computer Science, says that the performance benefits from miniaturization have been so great that, for decades, programmers have been able to prioritize making code-writing easier rather than making the code itself run faster. The inefficiency that this tendency introduces has been acceptable, because faster computer chips have always been able to pick up the slack.

“But nowadays, being able to make further advances in fields like machine learning, robotics, and virtual reality will require huge amounts of computational power that miniaturization can no longer provide,” Leiserson says. “If we want to harness the full potential of these technologies, we must change our approach to computing.”

Leiserson co-wrote the paper, published this week, with Research Scientist Neil Thompson, professors Daniel Sanchez and Joel Emer, Adjunct Professor Butler Lampson, and research scientists Bradley Kuszmaul and Tao Schardl. The authors make recommendations about three areas of computing: software, algorithms, and hardware architecture.
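The software point is easiest to see with a concrete example. The Science paper illustrates it with matrix multiplication, showing that rewriting an easy-to-write Python version as optimized, parallel code can yield speedups of several orders of magnitude. The sketch below is not code from the paper, just a minimal illustration of the same idea: a naive pure-Python triple-loop multiply timed against NumPy's optimized (BLAS-backed) routine. The matrix size and timing harness are arbitrary choices for demonstration.

```python
# Minimal sketch (illustrative, not from the paper): easy-to-write code vs.
# performance-engineered code for the same task, n x n matrix multiplication.

import time

import numpy as np


def naive_matmul(A, B):
    """Textbook O(n^3) matrix multiply using pure-Python loops."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C


n = 256  # kept small so the pure-Python version finishes in seconds
A = np.random.rand(n, n)
B = np.random.rand(n, n)

start = time.perf_counter()
naive_matmul(A.tolist(), B.tolist())
t_naive = time.perf_counter() - start

start = time.perf_counter()
_ = A @ B  # dispatches to an optimized, vectorized BLAS implementation
t_numpy = time.perf_counter() - start

print(f"pure Python: {t_naive:.3f} s  NumPy/BLAS: {t_numpy:.5f} s  "
      f"(~{t_naive / t_numpy:.0f}x speedup)")
```

Even at this toy scale, the optimized routine is typically hundreds to thousands of times faster on the same hardware, which is the kind of headroom the authors argue software performance engineering can reclaim once transistor scaling no longer picks up the slack.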