The first image of a black hole, which has since been immortalized, was as much a product of a computer stitching together bits of telescope-captured data as it was a proof of concept for Einstein’s theory of general relativity. This makes it a feat not just of physics and astronomy, but of computer science as well.
Led by Katie Bouman, the team’s achievements showed how computers equipped with the right algorithms—instructions on how to process certain kinds of input data—can offer capabilities beyond normal human capacity. In this case, the system was based on machine learning, an area of study steadily gaining traction amid the prevailing artificial intelligence (AI) era.
From learning to intelligence
Machine learning (ML), though related, differs from AI, which is an umbrella term involving building intelligent machines. Dr. Arnulfo Azcarraga from the College of Computer Studies (CCS) explains intelligent machines, which fall under AI, as those which “demonstrate some element of reasoning or [can] ably perform tasks [that are] generally considered difficult.”
ML, meanwhile, refers to a specific set of techniques and applications that can be a means to achieving AI’s goal of developing intelligent machines. According to Azcarraga, ML systems are “designed to improve their performance over time based on data or experience” given to them. The progressive “learning” or improvement aspect is a feature that other AI technologies do not employ in emulating intelligence.
Assistant professor Courtney Ngo, also from CCS, expounds on this difference, “In traditional AI, humans tell the machine what to look at and make decision rules for the machine. In machine learning, humans tell the machine what to look at, but the decision is inferred by the machine.”
What makes ML-based algorithms particularly powerful is that their instructions are not explicitly given to the computer. ML technologies, as Azcarraga describes, “automatically adapt their internal parameters based on the data available…as opposed to being precisely programmed and wired to do a given task.”
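To make that contrast concrete, here is a toy sketch (our illustration, not from the researchers): rather than a programmer hard-wiring the rule “the output is twice the input,” the program starts with an uninformed internal parameter and adjusts it until it fits example data.

```python
# Toy illustration: instead of hard-coding the rule y = 2 * x,
# the program adapts an internal parameter w to fit example data.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # inputs paired with desired outputs

w = 0.0        # internal parameter, initially uninformed
lr = 0.01      # learning rate: how strongly to adjust per example
for _ in range(1000):            # repeated exposure to the data
    for x, y in data:
        error = w * x - y        # how wrong the current parameter is
        w -= lr * error * x      # nudge w to reduce that error
print(round(w, 2))  # w converges toward 2.0, the rule implicit in the data
```

The “rule” was never written down anywhere; it emerges from the data, which is the adaptation Azcarraga describes.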
The use of ML has grown exponentially, as observed in technologies we use; Ngo cites face recognition and natural language processing as examples. Azcarraga also adds that big companies like Google and Netflix utilize ML to predict user preferences and offer personal recommendations aligned with these interests.
“I’m not saying they are perfect, but once [ML technologies] are adopted, they are usually better than humans,” comments Dr. Laurence Gan Lim from the Mechanical Engineering Department on the increasing incorporation of intelligent systems in various settings.
However, Azcarraga also provides insight regarding the limits of such training, “There is also this all-too-common misconception…that if we feed a machine with sufficiently large amounts of data, we can be sure that the machine can become as intelligent as we want it to be. That is not true.”
Deep levels, wide applications
Where humans can spot patterns in a dozen datasets, a computer using ML can spot them in millions. In one study that used a reinforcement learning method, researchers created an intelligent agent that takes snapshots of its surroundings and, after seeing only a small portion of the scene, reconstructs the entire 360-degree view from these glimpses.
Gan Lim offers another example involving speeding up what would otherwise have been a time-consuming process, “[In] analyzing thousands of images, I don’t think [a human expert] can compete with a computer. So [one] can use a computer that has been trained to…prioritize [and] select ones most likely to require attention.”
Such was the case in a deep learning (DL) system for detecting lung cancer tumors in computed tomography scans, which are series of X-ray images combined to show the internal structure of tissues. The model, still in its research stages, was already 94 percent accurate across nearly 7,000 patient cases.
A type of ML, DL uses artificial neural networks modeled on the neurons of the human brain, and often executes tasks even more effectively.
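As a rough sketch of the idea (the weights below are arbitrary, chosen only for illustration), each artificial “neuron” computes a weighted sum of its inputs and squashes it through an activation function; a network is simply layers of these neurons feeding into one another.

```python
import math

# Illustrative sketch of a neural network's basic unit (weights are arbitrary).
def neuron(inputs, weights, bias):
    # weighted sum of inputs, squashed into (0, 1) by a sigmoid activation
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# A tiny two-layer "network": the hidden layer's outputs feed the output neuron.
def tiny_network(x):
    h1 = neuron(x, [0.5, -0.4], 0.1)   # hidden neuron 1
    h2 = neuron(x, [-0.3, 0.8], 0.0)   # hidden neuron 2
    return neuron([h1, h2], [1.2, -0.7], 0.2)

print(tiny_network([1.0, 0.0]))  # a value between 0 and 1
```

Deep learning stacks many such layers and, as in the previous sketch, tunes the weights automatically from data rather than by hand.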
Meanwhile, OpenAI, a company with a goal of making AI that “benefits all,” delved into a more specialized sub-branch called deep reinforcement learning to develop OpenAI Five, a system intended to learn teamwork and coordination in a video game setting. Instead of being coded with prior knowledge of the specific game mechanics, the intelligent system learned to play Dota 2, an online multiplayer game, by playing against copies of itself thousands of times a day. OpenAI Five has since defeated the top professional Dota 2 teams in the world with ease.
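The underlying idea of reinforcement learning can be sketched on a hypothetical toy task (nothing like Dota 2’s scale): an agent in a five-cell corridor is never told the rules, only rewarded for reaching the goal, and through trial and error learns that moving right pays off.

```python
import random

# Toy reinforcement learning sketch (a hypothetical task, far simpler than
# Dota 2): an agent learns purely from trial and reward to move right.
N = 5                               # states 0..4; reaching state 4 ends an episode
Q = [[0.0, 0.0] for _ in range(N)]  # estimated value of each action (0=left, 1=right)
alpha, gamma, eps = 0.5, 0.9, 0.3   # learning rate, discount, exploration rate

random.seed(0)
for _ in range(500):                # episodes of trial and error
    s = 0
    while s != N - 1:
        # explore occasionally; otherwise take the action currently valued highest
        a = random.randrange(2) if random.random() < eps else (1 if Q[s][1] > Q[s][0] else 0)
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N - 1 else 0.0   # reward given only at the goal
        # blend the old estimate with the reward plus discounted future value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# the learned policy: preferred action per state (1 means "right")
print([1 if Q[s][1] > Q[s][0] else 0 for s in range(N - 1)])
```

OpenAI Five applies the same learn-from-reward principle, but with deep neural networks in place of this small table of values, and self-play in place of a fixed corridor.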
“The human should have the foresight to be intelligent enough to predict that things can go wrong, just like in other applications of technology,” Gan Lim emphasizes, talking about the responsibilities humans still hold in these endeavors.
The future of ML may lie where the living blends with non-living components. According to Azcarraga, the frontiers in this discipline are ever expanding into new territories—“where neurobiology meets silicon, and lab-generated electricity [fires] real neurons.”
Machine learning, infiltrating our daily experiences more than most seem to realize, holds exciting prospects that both experts and hobbyists are only just scratching the surface of. “ML is not coming soon; it is already very much in the air as we speak…I do not see the field reaching a plateau or any dead end on the horizon,” Azcarraga concludes.