2018 Yale Healthcare Hackathon

Artificial Intelligence Enabling Medicine


Day 1 (Friday)

5:30PM (TAC, 300 Cedar St.): Registration opens

6:00PM (TAC, 300 Cedar St.): Keynote & dinner

8:00PM (GPSCY Pub, 204 York St.): Networking social


Day 2 (Saturday)

9:00AM (Harkness Auditorium, 333 Cedar St.): Breakfast, Hack 101, problem pitches

12:00PM (Harkness Ballroom, 367 Cedar St.): Lunch, team formation, and hacking begins!


Day 3 (Sunday)

8:00AM (Harkness Ballroom, 367 Cedar St.): Breakfast & hacking

12:45PM (Harkness Auditorium, 333 Cedar St.): Keynote & demo

1:30PM (Harkness Auditorium, 333 Cedar St.): Team presentations, judging & prizes!

A healthcare hackathon is an event in which people with diverse perspectives, including clinicians, engineers, designers, software developers, businesspeople, and patients, come together for an intense, fun-filled, three-day weekend to develop solutions to challenges facing healthcare today. Participants will form teams, collaborate within a limited time frame, and focus on a specific problem or idea in healthcare to come up with innovative solutions.

Select projects will be given cash awards! Projects from every track are eligible for awards.

This year, Yale School of Medicine's Center for Biomedical Innovation and Technology is partnering with 4Catalyzer to engage the community in a hackathon focused on understanding the ways in which artificial intelligence may enable new practices in medicine. Specifically, there will be four tracks:


Track 1: Designing telemedicine enabled by point-of-care imaging

Hardware provided: Butterfly iQ, the first FDA-cleared smartphone-connected ultrasound-on-a-chip

When an affordable point-of-care device like the Butterfly iQ becomes ubiquitous, it will be quick and easy to perform many exams on a given patient. However, because exams will range across anatomical areas (cardiac, lung, abdominal) and over time, this wealth of information presents several challenges, best framed as the following questions.

First, how can ultrasound data be effectively visualized and presented using a mobile device? Second, how can doctors annotate ultrasound images on small screens for both point-of-care and tele-medical use cases? Third, how can ultrasound images be anatomically registered to the parts of the body being imaged?


1. Create a user-friendly and clinically meaningful interface that helps clinicians absorb the diverse information the iQ will provide. A relevant dataset of exams will be provided.


2. Design an interface for adding and editing annotations of ultrasound images on mobile devices. Annotations include lines, closed and open contours, and tele-sonography directions. Key metrics for this challenge are ease of use and accuracy.


3. Train a network to automatically detect the current view in real time, enabling anatomical registration of ultrasound images. This challenge will provide a dataset of ultrasound views and ask participants to train a network that is both fast and accurate.
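
For teams looking for a starting point on challenge 3, here is a minimal sketch of a small view classifier in PyTorch. It is an illustration under stated assumptions only: the view labels, the random stand-in frames, and the tiny architecture are all hypothetical and would be replaced by the provided dataset and a model tuned for real-time performance.

```python
# Minimal sketch of a real-time ultrasound view classifier (PyTorch).
# The view labels, random stand-in frames, and tiny architecture are all
# illustrative assumptions; the provided dataset would replace them.
import torch
import torch.nn as nn

VIEWS = ["cardiac", "lung", "abdominal"]  # hypothetical view labels

class ViewClassifier(nn.Module):
    """A small CNN: few layers keep per-frame inference latency low."""
    def __init__(self, num_views):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool to 32 features
        )
        self.head = nn.Linear(32, num_views)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = ViewClassifier(len(VIEWS))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 8 grayscale 128x128 frames with random labels.
frames = torch.randn(8, 1, 128, 128)
labels = torch.randint(0, len(VIEWS), (8,))

for step in range(10):  # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()
```

A compact network like this is one way to keep per-frame inference fast; lightweight pretrained architectures are another common starting point.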


Track 2: Insights from machine learning

Often, we would like to understand why a neural network makes a particular prediction. For example, a doctor may want to understand why a predicted measurement or diagnosis shows a certain value, and an engineer may want to debug an incorrect prediction. In this challenge, participants are asked to implement a scheme for neural network interpretation.
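
As one illustration of what such a scheme can look like, below is a minimal gradient-saliency sketch in PyTorch: it highlights the input pixels to which the predicted score is most sensitive. The toy model and random image are stand-ins, not part of the challenge materials.

```python
# Minimal sketch of one interpretation scheme: a gradient saliency map.
# It shows which input pixels the predicted score is most sensitive to.
# The toy model and random image are stand-ins for a real network and exam.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 10))  # toy classifier
image = torch.randn(1, 1, 64, 64, requires_grad=True)

logits = model(image)
predicted = logits.argmax()  # index of the top-scoring class

# Backpropagate the predicted class score down to the input pixels.
logits[0, predicted].backward()
saliency = image.grad.abs().squeeze()  # high values = influential pixels
print(saliency.shape)  # torch.Size([64, 64])
```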


Track 3: Optimizing clinical outcomes and experiences

In this track, participants may work on any validated clinical pain point, with the goal of creating solutions that optimize clinical outcomes and patient experiences. Solutions in areas such as patient transport, education, communication, scheduling, follow-up care, and improved diagnostics are encouraged. A sample challenge in the realm of diagnostics appears below.

Objective of the sample challenge:

Create a user-friendly and clinically meaningful interface for visualizing medical data collected over time for multiple patients. A relevant dataset of exams will be provided. The visualization should make it easy to extract information from the serial medical image data.
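
As a purely illustrative sketch of one possible direction, the snippet below plots hypothetical serial measurements for two patients with matplotlib; the patients, exam numbers, and values are made up and would come from the provided dataset in practice.

```python
# Minimal sketch of a longitudinal view: one line per patient across exams.
# The patients, exam numbers, and measurements below are made up.
import matplotlib.pyplot as plt

exams = {  # hypothetical serial measurements keyed by patient
    "patient_A": [(1, 4.2), (2, 4.5), (3, 5.1)],
    "patient_B": [(1, 3.8), (2, 3.9), (3, 3.7)],
}

fig, ax = plt.subplots()
for patient, series in exams.items():
    exam_numbers, values = zip(*series)
    ax.plot(exam_numbers, values, marker="o", label=patient)

ax.set_xlabel("Exam number")
ax.set_ylabel("Measurement (arbitrary units)")
ax.set_title("Serial measurements per patient")
ax.legend()
plt.show()
```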


Track 4: Using AI to diagnose disease in resource-constrained environments, such as space travel

As we prepare to travel to Mars in the 2030s, there is a need to develop new health technologies to predict, protect, and preserve astronaut health during deep-space exploration missions. However, these needs are not limited to space travel: the same technologies can be used to diagnose and treat diseases more effectively in other resource-constrained environments, such as the battlefield, rural and remote medical centers, and natural disaster zones. The objective of this track is to create impactful innovations that enable these missions.

One objective is to consider how to combine AI with easy-to-use testing. A variety of “artificial intelligence” applications are already being used to help create differential diagnoses based on symptoms, and there is an emerging but vibrant and promising field of point-of-care (POC) diagnostic tools, including imaging and clinical tests. One impactful innovation would be to combine these two areas to optimize diagnosis and treatment, and to identify additional diagnostic capabilities that could be developed for and used by non-healthcare providers to further enhance diagnosis.
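
One simple way to picture that combination: treat the symptom-based AI output as a prior probability of disease and update it with a POC test result using Bayes' rule. The sketch below is illustrative only; the prior, sensitivity, and specificity are assumed numbers, not data from any real test.

```python
# Minimal sketch: combine a symptom-based AI estimate (the prior) with a
# point-of-care test result via Bayes' rule. All numbers are illustrative.
def posterior(prior, sensitivity, specificity, test_positive):
    """Update the probability of disease after one POC test result."""
    if test_positive:
        true_rate, false_rate = sensitivity, 1 - specificity
    else:
        true_rate, false_rate = 1 - sensitivity, specificity
    numerator = true_rate * prior
    return numerator / (numerator + false_rate * (1 - prior))

# A symptom checker suggests a 30% probability of disease; a positive POC
# test (90% sensitive, 95% specific) raises it to roughly 89%.
print(posterior(0.30, 0.90, 0.95, test_positive=True))
```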

Expertise and mentoring will be provided by the Consortia for Improving Medicine with Innovation and Technology (CIMIT) and the Translational Research Institute for Space Health (TRISH) at Baylor College of Medicine.