AI Summer Program
The Iribe Initiative for Inclusion and Diversity is pleased to offer the TRAILS AI Summer Academy at the University of Maryland.
About
The TRAILS AI Summer Academy is a two-week, nonresidential computer programming and artificial intelligence (AI) summer camp at the University of Maryland. Students will come away from the camp knowing how AI can be used to help people and with a sense of the careers available in AI. Accepted students must complete ~25 hours of asynchronous content before the start date.
The camp is offered to rising 10th, 11th, and 12th graders. Students will engage in personal growth, education, and hands-on experiences presented by faculty, guest lecturers, and University of Maryland students.
This program intends to create a more inclusive and diverse field of artificial intelligence by targeting and serving underrepresented communities. Students will be given the opportunity to use artificial intelligence to address problems of a probabilistic and numeric nature. Participants will explore the field of AI through team projects, industry field trips, and presentations from guest speakers. There will also be opportunities to engage with faculty, staff and researchers who have been leaders in AI. Students will be exposed to a breadth of knowledge in the field with the goal of leveraging AI for social good.
The program will focus on three aspects:
- AI education and inspiration
- Personal growth
- Hands-on research experience
Typical Camp Day
- 9:00am-12:00pm: Classroom Instruction*
- 12:15-1:15pm: Lunch
- 1:30-5:15pm: Classroom Instruction
*Field trips and guest speakers are scheduled during Classroom Instruction time blocks.
Lab Meetings will take place in the Brendan Iribe building.
AI Education
- Formal AI curriculum taught by a local AI high school teacher
- Guest lectures by UMD professors and industry professionals
- In-depth introduction to ongoing research projects from faculty
- Field trips to AI industry leaders, where students are introduced to people, topics, and career opportunities
Personal Growth
- Discussions led by experts in career and personal development
- Small group mentoring with AI faculty and graduate students
- Social events with peers
Hands-on Experience
- Small-group research project led by faculty or graduate students; projects focus on using AI for societal good
- Group presentations showcasing work at the end of the program
Application Requirements
- Applicant must be able to attend both weeks.
- Applicant must be a rising 10th, 11th, or 12th grader.
- Applicant will be required to submit family and student information.
- Financial assistance is available for those with demonstrated need; apply by completing our Scholarship Application.
- Applicants must submit academic transcripts (unofficial transcripts are accepted for application review).
- Email addresses for teacher recommendations are required.
- I4C's AI Summer Program is no longer affiliated with AI4ALL as of 2023. For more information, please visit: https://medium.com/ai4allorg/changes-at-ai4all-a-message-from-ai4alls-ce...
Dates & Links
Program Date and Quick Info
Dates: July 8 - July 19 (25 hours of asynchronous content will be completed prior to the start date)
2-week nonresidential program experience
Target Student: Rising 10th, 11th, and 12th graders (focus on the DC, MD, and VA areas)
2024 I4C Summer Academy applications are closed.
Most recent grade report (transcript or report card): https://go.umd.edu/sum24Grades
Projects
2024 Projects
People are pretty good at getting around. When moving in a crowd, we’ve figured out how to stay close to the people in our group and move away from those who are not; our behavior depends on the type of interaction. We can also infer others’ objectives implicitly, simply by observing how they move. Robots, on the other hand, rely on explicit communication or instructions to avoid collisions or getting stuck while moving toward a goal. The ability to model these interactions can help robots predict the uncertain behavior of pedestrians in the absence of explicit communication. That way a food delivery robot, for example, could autonomously and safely navigate through crowds without human help.
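One classical way to model this kind of implicit interaction is a social-force-style simulation, in which each agent feels an attraction toward its goal and a repulsion from nearby agents, and collision avoidance emerges without any explicit communication. The sketch below is a generic illustration under assumed parameters, not the project's actual method: two simulated pedestrians walk toward opposite goals and sidestep each other purely through these forces.

```python
import math

# Two pedestrians walking toward opposite goals along the same line.
# Each feels a pull toward its goal and a push away from the other agent.
def unit(dx, dy):
    d = math.hypot(dx, dy) or 1e-9
    return dx / d, dy / d

def simulate(steps=200, dt=0.05, repulsion=2.0, radius=1.0):
    pos = [[0.0, 0.0], [10.0, 0.1]]      # slight offset breaks symmetry
    goals = [[10.0, 0.0], [0.0, 0.0]]
    min_gap = float("inf")
    for _ in range(steps):
        vels = []
        for i in (0, 1):
            # Attraction: unit vector toward this agent's own goal.
            gx, gy = unit(goals[i][0] - pos[i][0], goals[i][1] - pos[i][1])
            # Repulsion: decays exponentially with distance to the other agent.
            j = 1 - i
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            dist = math.hypot(dx, dy)
            rx, ry = unit(dx, dy)
            push = repulsion * math.exp(-dist / radius)
            vels.append((gx + push * rx, gy + push * ry))
        for i in (0, 1):
            pos[i][0] += vels[i][0] * dt
            pos[i][1] += vels[i][1] * dt
        min_gap = min(min_gap, math.hypot(pos[0][0] - pos[1][0],
                                          pos[0][1] - pos[1][1]))
    return pos, min_gap

pos, min_gap = simulate()
print(f"closest approach: {min_gap:.2f}")
```

Because the repulsion grows faster than the goal attraction as the agents close in, they never collide; the small lateral offset lets them slide past each other, much as people do in a hallway.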
Researchers
Assistant Professor, UMD Department of Mechanical Engineering
Recent years have seen tremendous successes in machine learning, especially reinforcement learning (RL), where an agent makes sequential decisions by interacting with its environment via trial and error. Prominent, high-profile examples include AlphaGo, an intelligent Go-playing agent that has beaten human champions; autonomous driving; and, more recently, the training of large language models such as ChatGPT. Many of these success stories concern scenarios with “multiple” decision-makers/learning agents, each making individual, strategic decisions with possibly misaligned objectives. An example is autonomous driving, where each agent (self-driving car) has its own goal, yet the cars are coupled by interacting on the road, which may cause congestion if the fleet is not scheduled properly. It is thus natural to study “multi-agent” reinforcement learning and the behavior of multiple RL agents coexisting in a common environment. Our goal is first to get familiar with the concepts of reinforcement learning and sequential decision-making, and then with multi-agent RL. Further, we aim to develop new multi-agent RL algorithms useful in settings beyond game-playing (e.g., video games and Go), which has mostly focused on “competition” among agents; instead, these algorithms can “encourage” cooperation among agents for social good, even when their objectives differ. Along the way, students will also get familiar with Python, a useful programming language, as well as basic mathematical concepts in related areas such as optimization, statistics, and game theory.
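As a taste of the single-agent starting point, here is a minimal sketch of tabular Q-learning, the trial-and-error update at the heart of RL. The toy environment (a five-state corridor with a reward at one end) and all parameters are illustrative choices, not part of the project:

```python
import random

# Minimal tabular Q-learning on a 1-D corridor: states 0..4,
# start at state 0, reward +1 for reaching state 4; actions: 0=left, 1=right.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Environment dynamics: move left/right, reward only at the goal."""
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy action selection: the "trial and error" part.
            if rng.random() < EPSILON:
                action = rng.randrange(2)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            nxt, reward, done = step(state, action)
            # Q-learning update toward the bootstrapped target.
            target = reward + GAMMA * max(q[nxt])
            q[state][action] += ALPHA * (target - q[state][action])
            state = nxt
    return q

q = train()
# After training, the greedy policy moves right (action 1) in every state.
policy = [0 if qa[0] > qa[1] else 1 for qa in q[:GOAL]]
print(policy)  # → [1, 1, 1, 1]
```

Multi-agent RL generalizes exactly this loop: several such learners share one environment, and each agent's best action depends on what the others are doing.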
Researchers
Assistant Professor, UMD Department of Electrical & Computer Engineering
Graduate Student, UMD Department of Computer Science
Researchers
Associate Professor, UMD Department of Computer Science
Graduate Student, UMD Department of Computer Science
Graduate Student, UMD Department of Computer Science
People with disabilities (PwD) are the largest minority group in the world. For PwD, AI carries much promise and can create a more accessible world by (i) facilitating the design of novel technologies, (ii) creating personalized technology experiences, and (iii) scaling technology deployments. Yet this promise is undermined by emergent concerns about the risks AI is likely to pose to PwD, such as threats to individual privacy, bias, and discrimination. Despite these highlighted benefits and concerns, research about AI’s impact on PwD is still nascent and limited. Human-Computer Interaction (HCI) research is well positioned to address this gap, given its goal of centering end users and communities in the design and evaluation of technologies like AI. HCI is a multidisciplinary field that employs a range of quantitative (e.g., surveys), qualitative (e.g., interviews), and design (e.g., usability testing) research methods to work with such end users. In this HCI research project, we will conduct design workshops that leverage qualitative methods, including focus groups and participatory design. The workshops will focus on four themes: (i) visibility: bringing visibility to the workings of AI so PwD can recognize when they encounter AI; (ii) assets: surfacing the benefits of AI technologies to empower PwD to take advantage of AI; (iii) liabilities: uncovering the risks of AI technologies to enable PwD to resist AI harms; and (iv) design & rights: designing solutions and policies to address the highlighted risks and harms. We are working with community organizations in the DMV area that serve PwD to organize and conduct these workshops. Results from this project will contribute to a more nuanced understanding of the social impacts of AI on marginalized communities.
Researchers
Postdoctoral Fellow at the UMD Values-Centered Artificial Intelligence (VCAI) Initiative
AI has generated tremendous excitement in high-stakes applications, e.g., hiring and lending, that profoundly influence people’s lives. With the growing use of AI in high-stakes decision-making, there is an urgent need to understand how these models make their decisions and which input features play a significant role. For example, if a loan is denied, one might be interested in (and sometimes even entitled by law to) knowing which features mattered most in that decision. In this project, students will explore explainability techniques across a range of models, from neural networks to large language models, on tabular data. Students will learn to apply explainability techniques such as SHAP and LIME to visualize the contribution of different features to a neural network’s overall decision. Next, students will experiment with an LLM for classification on the same tabular dataset, examine which features played a significant role, and compare results. An important aspect of this project is understanding how different features can substitute for each other, often yielding similarly performing models. Once the most important features have been identified, an interesting experiment is to drop them, retrain another model, and see which features are most important now. Interestingly, features that were not at all important initially may become important.
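The substitute-features experiment described above can be sketched in plain Python. This is an illustrative toy, not the project's actual pipeline: a hypothetical dataset with two near-duplicate features stands in for real tabular data, and a one-feature threshold classifier ("decision stump") stands in for real models and SHAP/LIME scores.

```python
import random

# Tiny synthetic tabular dataset: x0 and x1 are near-duplicates (substitutes),
# x2 is pure noise. The label depends on the shared signal behind x0 and x1.
rng = random.Random(0)
X, y = [], []
for _ in range(200):
    signal = rng.random()
    x0 = signal
    x1 = signal + rng.gauss(0, 0.01)   # x1 is a close substitute for x0
    x2 = rng.random()                  # uninformative noise feature
    X.append([x0, x1, x2])
    y.append(1 if signal > 0.5 else 0)

def stump_accuracy(X, y, feat):
    """Accuracy of a one-feature threshold classifier (a decision stump)."""
    preds = [1 if row[feat] > 0.5 else 0 for row in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

def most_important(X, y, feats):
    """Rank features by how well each one alone predicts the label."""
    scores = {f: stump_accuracy(X, y, f) for f in feats}
    return max(scores, key=scores.get), scores

top, scores = most_important(X, y, [0, 1, 2])
print("most important feature:", top)

# Drop the winner and "retrain": the substitute feature takes over,
# while the noise feature stays unimportant.
remaining = [f for f in [0, 1, 2] if f != top]
new_top, _ = most_important(X, y, remaining)
print("after dropping it, most important feature:", new_top)
```

Here the importance crown simply passes from one near-duplicate feature to the other, which is exactly the behavior the project asks students to look for in real models.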
Researchers
Assistant Professor, UMD Department of Electrical & Computer Engineering
Graduate Student, UMD Department of Electrical & Computer Engineering