Artificial Intelligence: What's Trust Got To Do With It?


Assistant Professor Danushka Bandara, PhD, working with students to collect brain data using sensors.


If good relationships between humans are all about trust, can the same be said about the relationship between humans and machines?

Having trust, or confidence, in a device’s reliability is essential as artificial intelligence (AI) renders computers more capable of acting as intelligent agents. Understanding the role of trust will help manufacturers guide the evolution of AI systems. After all, if consumers don’t trust a device, they are unlikely to buy it.

Assistant professor of computer science and engineering Danushka Bandara, PhD, is on a mission to measure that trust, specifically between people and their AI-controlled devices, with the goal of improving it.

“Our AI systems examine data, images, or other input, then make a decision based on the internal model that they have,” said Dr. Bandara. “Humans look at the results and have to decide if they are willing or not to accept that decision. For example, there are systems that can determine if an image is authentic or has been modified. Trust means that I am willing to accept what the AI system tells me about that image.”

Since “trust” is subjective, Dr. Bandara, along with biomedical student Robert Dillon ’24 and electrical engineering student Noor Khattak ’24, has created an experimental design project that uses biometrics to measure trust objectively. In their experiments, eye trackers, heart rate sensors, galvanic skin response sensors, and body temperature readouts all record a subject’s physiology and behavior while interacting with an AI-assisted device.

The team uses PsychoPy, VS Code, NIRx, and Tobii Pro Eye Tracking for their experiments. They collect data from subjects through eye movement, brain activity, button clicks, and response time. The subjects are also asked a series of questions and shown images in order to test their trust in the AI system.
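PsychoPy experiments of this kind are scripted in Python. As a rough illustration of a single trial, not the team’s actual protocol, the sketch below presents an image, asks the participant whether they accept the AI’s judgment of it, and logs the keypress and response time; the file names, key mapping, and prompt wording are assumptions.

```python
# Illustrative PsychoPy trial: show an image, ask whether the participant
# trusts the AI's judgment of it, and log the keypress and response time.
# The image file, key mapping, and CSV output are placeholders.
from psychopy import visual, core, event
import csv

win = visual.Window(size=(1280, 720), color="grey", units="pix")
stim = visual.ImageStim(win, image="sample_image.png")  # placeholder stimulus
prompt = visual.TextStim(
    win,
    text="AI says this image is authentic.\nPress Y to accept, N to reject.",
    pos=(0, -300),
    height=24,
)

clock = core.Clock()
stim.draw()
prompt.draw()
win.flip()
clock.reset()

# Block until the participant presses Y or N; record key and reaction time.
keys = event.waitKeys(keyList=["y", "n"], timeStamped=clock)
response, rt = keys[0]

with open("trial_log.csv", "a", newline="") as f:
    csv.writer(f).writerow(["sample_image.png", response, rt])

win.close()
core.quit()
```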

“The fusion of these sensor data allows me to measure how human-AI trust develops, then eventually propose ways to improve it,” Dr. Bandara said.
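The article does not describe how that fusion is done, but a minimal sketch of feature-level fusion, under assumed file names, column names, and labels, might align the timestamped sensor streams on a common clock, average each signal within a trial, and fit a simple classifier against self-reported trust. The classifier choice below is illustrative, not the team’s pipeline.

```python
# Hypothetical feature-level sensor fusion: align timestamped streams,
# aggregate per trial, and relate the features to self-reported trust.
import pandas as pd
from sklearn.linear_model import LogisticRegression

eye = pd.read_csv("eye.csv").sort_values("t")  # columns: t, trial, pupil_diameter
hr = pd.read_csv("hr.csv").sort_values("t")    # columns: t, heart_rate
gsr = pd.read_csv("gsr.csv").sort_values("t")  # columns: t, skin_conductance

# Align the slower physiological streams to the eye-tracker timestamps.
fused = pd.merge_asof(eye, hr, on="t")
fused = pd.merge_asof(fused, gsr, on="t")

# One feature vector per trial: the mean of each signal during that trial.
features = fused.groupby("trial")[
    ["pupil_diameter", "heart_rate", "skin_conductance"]
].mean()

# Self-reported labels: 1 = participant accepted the AI's decision.
labels = pd.read_csv("labels.csv").set_index("trial")["trusted"]
X, y = features.loc[labels.index], labels

model = LogisticRegression().fit(X, y)
print("Training accuracy:", model.score(X, y))
```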

The team is also examining whether an AI device seems more or less trustworthy depending on how it presents data. For example, might a driver be more likely to trust a self-driving car if he could view a dashboard panel that shows what the car is detecting, rather than a blank screen? Would a surgeon using an AI-assisted device feel more comfortable if she saw data confirming that the device operated with 95.6 percent accuracy?

“If the driver understands why the car is slowing down or changing lanes, he or she is more likely to have confidence in that AI system,” said Dr. Bandara, adding that the results of his research can be applied to many fields, from self-driving cars to virtual assistants such as Alexa or Siri, and to healthcare systems.

Besides working under the guidance of Dr. Bandara to develop and conduct AI experiments, Dillon and Khattak are writing a paper for publication later this year.


February 6, 2023
