BBL Series
Join us each Thursday during the fall and spring semesters as we present interesting speakers on topics ranging from current areas of interest in HCI to software demos/reviews, study design, proposed research topics, and more. The BBL is the one hour each week where we all come together, giving HCIL members the opportunity to build collaborations, increase awareness of each other's activities, and generally just have a bit of fun together.
If you would like to give (or suggest) a future BBL talk, send email to HCIL Director Jessica Vitak (jvitak@umd.edu) with your proposed talk title, a brief abstract, and your bio.
Talks are held in the HCIL (HBK2105), but if you can’t make it in person, register for Zoom here.
Fall 2024 Upcoming Events
BBL Speaker Series: Welcome Back Event!
Date: Aug 29th, 2024 12:30 PM
Talk Title: Welcome Back Event!
Location: HBK-2105 only
Description: Join us in welcoming everyone back to the HCIL for the 2024 fall semester. Come chat with friends and enjoy some pizza! Note: this event will be in person only.
BBL Speaker Series: CHI Writing and Reflecting
Date: Sep 5th, 2024 12:30 PM
Talk Title: CHI Writing and Reflecting
Location: HBK 2105
Description: With the CHI deadline looming, we'll use this week's brown bag time slot for folks to take a break from writing to relax (a little), enjoy some pizza with colleagues, and get ready for the final push. So if you're on campus, stop by HBK2105 to get a slice and chat with other HCIL members.
BBL Speaker Series: CHI Deadline
Date: Sep 12th, 2024 12:30 PM
Location: HBK 2105
It's the CHI deadline! As many of our members will be putting finishing touches on their CHI 2025 submissions, we won't have a speaker today. Instead, stop by the lab (HBK-2105) to take a breather, grab a snack, and chat with your HCIL colleagues.
BBL Speaker Series: Cooperative Inquiry: When Children and Adults Design Together
Date: Sep 19th, 2024 12:30 PM
Talk Title: Cooperative Inquiry: When Children and Adults Design Together
Speaker: beth bonsignore, Associate Research Professor; Director, BA in Tech & Info Design; Director, KidsTeam
Location: HBK 2105 and Zoom
Abstract: The goal of Participatory Design is to include as many people (users) as possible in all stages of the technology design process. Initially, it was unclear whether children could be actively involved in participatory design in any role beyond "end user" or "tester." In 1998, KidsTeam was launched at UMD’s Human-Computer Interaction Lab to explore practical and ethical questions about co-design between children and adults. This research resulted in Cooperative Inquiry, a design-based research approach that is now in use internationally across academia and industry. Its participatory design practices and techniques have been incorporated into HCI curricula and integrated into design-based research in the Learning Sciences, with impacts on industry practice. KidsTeam has also expanded its reach, demonstrating its replicability, utility, and generalizability as similar intergenerational co-design capabilities have been created in university/K-12, not-for-profit, and industry settings. More recently, new horizons for intergenerational co-design have opened up. For example, the Cooperative Inquiry design framework has become foundational in emerging critical design and computational empowerment programs. This raises interesting research questions about the role of youth in these new efforts. In this talk, Beth will provide a brief overview of KidsTeam at UMD: how it started, how it's going, and how it might best meet these new challenges.
Bio: Elizabeth ("beth") Bonsignore is an associate research professor at UMD's College of Information and Human-Computer Interaction Lab (HCIL). Her research explores the design of interactive play and social experiences that promote new media literacies and arts-integrated science learning. She co-designs and advocates with youth, families, and local communities with the goal of empowering youth historically underrepresented in STEM to advance in these fields. Her recent collaborations with amazing graduate students have explored the challenges (and conundrum) of making participatory design as inclusive as possible through assets-based design and funds of identity.
BBL Speaker Series: Student Lightning Talks
Date: Sep 26th, 2024 12:30 PM
Talk Title: Student Lightning Talks
Location: HBK 2105 and Zoom
Description:
This BBL will be dedicated to four student lightning talks. We are excited to hear what they are working on!
How do lightning talks work?
Typically, people give a 4-5 minute "presentation" -- this can be very informal or involve slides. The presentation gives some background on your project and then introduces a specific question or "ask" that you want feedback on. Then we have ~15 minutes of conversation with attendees about your question/topic. This is a great opportunity for students to get feedback on research ideas or projects in various stages.
BBL Speaker Series: From Haptic Illusions to Beyond Real Interactions in Virtual Reality
Date: Oct 3rd, 2024 12:30 PM
Talk Title: From Haptic Illusions to Beyond Real Interactions in Virtual Reality
Speaker: Parastoo Abtahi, Assistant Professor of Computer Science, Princeton University
Location: HBK 2105 and Zoom
Watch Here
Abstract: Advances in audiovisual rendering have led to the commercialization of virtual reality (VR) hardware; however, haptic technology has not kept up with these advances. While haptic devices aim to bridge this gap by simulating the sensation of touch, many hardware limitations make realistic touch interactions in VR challenging. In my research, I explore how by understanding human perception, we can design VR interactions that not only overcome the current limitations of VR hardware but also extend our abilities beyond what is possible in the real world. In this talk, I will present my work on redirection illusions that leverage the limits of human perception to improve the perceived performance of encountered-type haptic devices, such as improving the position accuracy of drones, the speed of tabletop robots, and the resolution of shape displays when used for haptics in VR. I will then present a framework I have developed through the lens of sensorimotor control theory to argue for the exploration and evaluation of VR interactions that go beyond mimicking reality.
Bio: Parastoo Abtahi is an Assistant Professor of Computer Science at Princeton University, where she leads Princeton's Situated Interactions Lab (Ψ Lab) as part of the Princeton HCI Group. Before joining Princeton, Parastoo was a visiting research scientist at Meta Reality Labs Research. She received her PhD in Computer Science from Stanford University, working with Prof. James Landay and Prof. Sean Follmer. Her research area is human-computer interaction, and she works broadly on augmented reality and spatial computing. Parastoo received her bachelor's degree in Electrical and Computer Engineering from the University of Toronto, as part of the Engineering Science program.
BBL Speaker Series: Social Media's Midlife Crisis? How Public Discourse Imagines Platform Futures
Date: Oct 10th, 2024 12:30 PM
Talk Title: Social Media’s Midlife Crisis? How Public Discourse Imagines Platform Futures
Speaker: Chelsea Butkowski, American University & Frances Corry, University of Pittsburgh
Location: HBK 2105 and Zoom
Watch Here
Abstract: Though the social media ecosystem has never been stable—with platforms constantly emerging, evolving, aging, and closing—the last few years have appeared particularly volatile. Major companies like Meta and X have undergone historic transformations, and a slew of new platforms have also emerged, including TikTok, BeReal, Threads, Bluesky, Mastodon, and others. It appears as if social media companies, the platforms they run, and the users they support, have arrived at an existential juncture. What is social media for in today’s society––and what does its future look like? Decades on, is “new media” still “new” after all? In this talk, Drs. Chelsea Butkowski and Frances Corry draw on their recent research analyzing press coverage of emerging platforms to argue that contemporary social media discourse has become fueled by cultural memory, a phenomenon that they call “nostalgic anticipation.” In other words, speculation about social media's volatile future is persistently filtered through a yearning for its past. Butkowski and Corry will discuss how this unique juncture for social media can contribute to reframing understandings of platforms in our scholarship and our everyday lives.
Bio:
Chelsea Butkowski is an Assistant Professor of Communication at American University. Their research examines the relationship between media technologies and identity, including the social practices and effects of everyday social media use. Butkowski's recent work focuses on digital identity during periods of sociotechnical transition and disruption.
Frances Corry is an Assistant Professor in the Department of Information Culture & Data Stewardship at the University of Pittsburgh. Her research and teaching focus on the prehistories and afterlives of data-intensive systems – from social media platforms to AI tools. Corry's book project examines the process of social media platform closure and content deletion to ask about the future of cultural memory.
BBL Speaker Series: Scaling Expertise via Language Models with Applications to Education
Date: Oct 17th, 2024 12:30 PM
Talk Title: Scaling Expertise via Language Models with Applications to Education
Speaker: Rose Wang, Computer Science PhD candidate, Stanford University
Location: HBK 2105 and Zoom
Watch Here
Abstract: Access to expert knowledge is essential for fostering high-quality practices across domains like education. However, many novices—such as new teachers—lack expert guidance, limiting their growth and undermining student outcomes. While language models (LMs) hold potential for scaling expertise, current methods focus on surface patterns rather than capturing latent expert reasoning. In this talk, I'll discuss how my research addresses this by (1) identifying problematic practices for intervention from noisy, large-scale interaction data, (2) developing benchmarks that measure expert quality of practices, and (3) extracting latent expert reasoning to adapt LMs for real-time educational interventions. I'll highlight how my methods have been deployed to improve K-12 education at scale, positively impacting millions of live interactions between students and educators.
Bio: Rose E. Wang is a Computer Science PhD candidate at Stanford University. She develops machine learning and natural language processing methods to tackle challenges in real-world interactions, with a focus on education. Her work directly improves the education of underserved students through partnerships she has cultivated during her Ph.D., including Title I school districts and several education companies, impacting 200,000+ students, 1,700+ teachers, and 16,100+ tutors in millions of tutoring sessions across the U.S., UK, and India. Her work has been recognized with the NSF Graduate Research Fellowship, a CogSci Best Paper Award, a NeurIPS Cooperative AI Best Paper Award, an ICLR Oral, a Rising Star in Data Science designation, a Building Educational Applications Ambassador Paper Award, and the Learning Engineering Tools Competition Award.
BBL Speaker Series: Designs to Support Better Visual Data Communication
Date: Oct 24th, 2024 12:30 PM
Talk Title: Designs to Support Better Visual Data Communication
Speaker: Cindy Xiong, Assistant Professor, School of Interactive Computing, Georgia Institute of Technology
Location: HBK 2105 and Zoom
Watch Here
Abstract: Well-chosen data visualizations can lead to powerful and intuitive processing by a viewer, both for visual analytics and data storytelling. When badly chosen, visualizations leave important patterns opaque or misunderstood. So how can we design an effective visualization? I will share several empirical studies demonstrating that visualization design can influence viewer perception and interpretation of data, referencing methods and insights from cognitive psychology. I leverage these study results to design natural language interfaces that recommend the most effective visualization to answer user queries and help them extract the ‘right’ message from data. I then identify two challenges in developing such an interface. First, human perception and interpretation of visualizations is riddled with biases, so we need to understand how people extract information from data. Second, natural language queries describing takeaways from visualizations can be ambiguous and thus difficult to interpret and model, so we need to investigate how people use natural language to describe a specific message. I will discuss ongoing and future efforts to address these challenges, providing concrete guidelines for visualization tools that help people more effectively explore and communicate data.
Bio: Cindy Xiong Bearfield is an Assistant Professor in the School of Interactive Computing at Georgia Institute of Technology. Bridging the fields of psychology and data visualization, Professor Bearfield aims to understand the cognitive and perceptual processes that underlie visual data interpretation and communication. Her research informs the design and development of visualizations and visualization tools that elicit calibrated trust in complex data to facilitate more effective visual data analysis and communication. She received her Ph.D. in Cognitive Psychology and her MS in Statistics from Northwestern University. Her research at the intersection of human perception, cognition, and data visualization has been recognized with an NSF CAREER award. She has received paper awards at premier psychology and data visualization venues, including ACM CHI, IEEE PacificVis, Psychonomics, and IEEE VIS. She is also one of the founding leaders of VISxVISION (visxvision.com), an initiative dedicated to increasing collaboration between visualization researchers and perceptual + cognitive psychologists.
BBL Speaker Series: A New Model for News Engagement Depends on Human-Computer Interaction
Date: Oct 31st, 2024 12:30 PM
Talk Title: A New Model for News Engagement Depends on Human-Computer Interaction
Speaker: Dr. Ronald Yaros, Associate Professor, UMD's Digital Engagement Lab (.org), Philip Merrill College of Journalism
Location: HBK 2105 and Zoom
Watch Here
Abstract: Despite the enduring importance of quality writing, reporting, and sourcing in local journalism, digital communicators have yet to fully leverage cutting-edge research from other disciplines to meet the evolving needs of today’s news consumers. Since 2005, Yaros has combined journalism with concepts from cognitive psychology, educational psychology, and human-computer interaction to develop a new model for digital engagement. The unique model combines ten user and content variables and is incorporated into a “smart story suite” so users can select their news narrative. The interface builds what Yaros calls "attention momentum" without depending on clickbait, text-heavy pages, and video. As news consumption and advertising revenues continue to decline, the model seeks to increase the probability that more users will spend more time with more news. Yaros looks forward to presenting this applied research and welcomes collaboration with his team in the digital engagement lab.
Bio: Dr. Yaros is an Associate Professor in the Philip Merrill College of Journalism and an Affiliate Associate Professor in the College of Information Science. He earned his Ph.D. at the University of Wisconsin-Madison. He then taught at the University of Utah from 2005 to 2008, where he completed eye-tracking research on his early model before joining Maryland in 2008. Yaros is also a Tow-Knight Disruptive Educator for Journalism Innovation and Entrepreneurship, an Apple Distinguished Educator, and recipient of one of the first campus-wide Donna B. Hamilton Excellence in Undergraduate Teaching Awards.
BBL Speaker Series: Understand, Predict, and Enhance User Behavior in Mixed Reality
Date: Nov 7th, 2024 12:30 PM
Talk Title: Understand, Predict, and Enhance User Behavior in Mixed Reality
Speaker: Yukang Yan, Assistant Professor, Department of Computer Science, University of Rochester
Location: HBK 2105 and Zoom
Watch Here
Abstract: My research focuses on enhancing human-computer interaction in Mixed Reality. As the integration of digital and physical worlds through Mixed Reality expands the interaction space beyond traditional screens, it has a significant impact on how users perceive and interact with the world. Through user studies, I observe and model the behavioral and perceptual patterns of users as they interact with Mixed Reality. Based on the findings, I design and develop interaction techniques that are tailored to these behavioral changes in order to facilitate user input and information display. Additionally, I explore augmentation methods that allow users to surpass their capabilities in the real world, such as embodying healthier virtual avatars or non-humanoid avatars to gain unique experiences not possible in reality.
Bio: I'm an Assistant Professor in the Department of Computer Science at the University of Rochester, where I serve as co-director of the ROCHCI Group and lead the BEAR Lab. I'm also a participating faculty member in the University of Rochester's AR/VR Initiative. Prior to this, I worked as a postdoc in the Augmented Perception Lab at Carnegie Mellon University, and I received my Ph.D. from Tsinghua University. My research sits at the intersection of Human-Computer Interaction and Mixed Reality. I publish at ACM CHI, UIST, IMWUT, and IEEE VR, with two Best Paper Honorable Mention Awards from CHI 20 and 23 and a Best Paper Nominee Award from VR 23. I served as CHI 23 Late-Breaking Work Co-Chair and UIST 24 Registration Co-Chair.
BBL Speaker Series: Visualizing the Unseen: Perceptographer – A Pioneering AI Paradigm for Brain-Computer Interaction
Date: Nov 14th, 2024 12:30 PM
Talk Title: Visualizing the Unseen: Perceptographer – A Pioneering AI Paradigm for Brain-Computer Interaction
Speaker: Elia Shahbazi
Location: HBK 2105 and Zoom
Abstract: Understanding the complexities of human perception is a fundamental challenge in neuroscience. We have recently developed an innovative approach called Perceptography to visualize intricate perceptual distortions resulting from localized brain stimulation in the inferotemporal (IT) cortex. Perceptography leverages machine learning to create and refine specific image distortions that are challenging for animals to distinguish from the effects of cortical stimulation. In this talk, I will present Perceptographer, a groundbreaking, customizable framework for visualizing brain-stimulation-induced perceptual events across various regions of the visual cortex. By overcoming the limitations of existing image generation models in handling complex distortions, Perceptographer opens new pathways for exploring and understanding the intricate phenomena of brain-induced perception.
Bio: Elia Shahbazi is a trailblazing computational neuroscientist whose diverse expertise spans applied and pure mathematics, software engineering, artificial intelligence, and entrepreneurial leadership. In 2018, Elia joined the NIH as a Research Scientist Fellow in the Unit of Neuron, Behavior, and Circuits. As a computational neuroscientist, he has been at the forefront of merging AI with neuroscience and bio-related sciences.
BBL Speaker Series: Intent-AI Interaction: Elevating Human-Computer Interaction to the Intent and Conceptual Level
Date: Nov 21st, 2024 12:30 PM
Talk Title: Intent-AI Interaction: Elevating Human-Computer Interaction to the Intent and Conceptual Level
Speaker: Jason Ding
Location: HBK 2105 and Zoom
Watch Here
Abstract: Technological advancements are continually reshaping human-computer interaction (HCI). Although direct manipulation methods, such as clicking and dragging icons in graphical user interfaces (GUIs), remain widespread, generative AI now has the ability to understand user interfaces and autonomously perform tasks. This reduces the reliance on direct user manipulation and prompts a reimagining of the HCI paradigm. In this talk, we introduce "intent-AI interaction" as a forward-looking paradigm where interactions are driven by the user's intent and conceptual reasoning rather than command-level actions. We will demonstrate this paradigm shift through three studies: human-AI co-creation of news headlines, ideation enabled by cross-domain analogies, and data exploration.
Bio: Zijian "Jason" Ding is a 4th-year PhD candidate at the University of Maryland's Human-Computer Interaction Lab. His research focuses on intent-AI interaction as a new paradigm in human-computer interaction. His work has been published in top-tier AI and HCI conferences, including EMNLP, CHI, CSCW, and UIST, with recognition such as a best paper honorable mention from ACM Creativity & Cognition. Ding's industry experience includes internships at Microsoft Research, the MIT-IBM Watson AI Lab (IBM Research), and Dataminr, where his work led to publications, first-authored patents, and real-world products.
BBL Speaker Series: Community-Based Approaches to Building Peer Support Systems for Work
Date: Dec 5th, 2024 12:30 PM
Talk Title: Community-Based Approaches to Building Peer Support Systems for Work
Speaker: Yasmine Kotturi, Assistant Professor of Human-Centered Computing, Information Systems, University of Maryland, Baltimore County
Location: HBK 2105 and Zoom
Watch Here
Abstract: The “future of work” promises innovation and opportunity, yet for many, it manifests as uncertainty and instability—exposing a stark divide between optimistic predictions and lived realities. In this talk, I explore the critical role of peer networks in addressing worker challenges such as isolation and skill development in digitally-mediated work. Drawing on community-based, participatory design methods, I present three peer support systems—Hirepeer, Peerdea, and Tech Help Desk—that tackle these issues by fostering trust and accountability within worker communities. These systems demonstrate how localized, community-based approaches can overcome the limitations of current approaches to building sociotechnical systems which prioritize scale over relationship building. Finally, my work highlights the importance of constructive community-academic partnerships in computing which kickstart and sustain community initiatives.
Bio: Dr. Yasmine Kotturi is an Assistant Professor of Human-Centered Computing at the University of Maryland, Baltimore County in the Information Systems Department. Her research focuses on digitally-mediated employment and entrepreneurship, examining how distributed workers leverage peer networks to navigate precarity and advance their careers. Dr. Kotturi has been recognized as an EECS Rising Star, WAIM Fellow (Work in the Age of Intelligent Machines), and Siebel Scholar. She has collaborated with nonprofits, as well as leading companies including Instagram and Etsy. Dr. Kotturi earned her Ph.D. in Human-Computer Interaction from Carnegie Mellon University and has held positions at leading research institutions such as Microsoft Research Asia and MIT's Teaching Systems Lab. Learn more about Dr. Kotturi's work: ykotturi.github.io and @yasminekotturi.
BBL Speaker Series: "Learning to Code with AI"
Date: Jan 30th, 2025 12:30 PM
Talk Title: Learning to Code with AI
Speaker: Majeed Kazemi, PhD candidate at University of Toronto
Location: HBK 2105 and Zoom
Abstract:"In the evolving landscape of programming with generative AI, critical questions emerge around its impact on cognition, interaction, and learning. In this talk, I will present findings from my research on three key topics: (a) What are the implications of using AI when learning to code for the first time? Does AI enhance learning or foster over-reliance, potentially hindering outcomes? (b) How can we design novel interfaces that cognitively engage learners with AI-generated solutions—enhancing users’ ability to extend and modify code without creating friction? (c) How to design pedagogical AI coding assistants for educational contexts? I will discuss the design of CodeAid, results from its 12-week deployment in a large class of 750 students, and perspectives from students and educators."
Bio: "Majeed is a PhD candidate in Computer Science at the University of Toronto, advised by Prof. Tovi Grossman. His research in Human-Computer Interaction liest at the intersection of programming, education, and AI. As a systems researcher, his work draws from learning sciences and interaction design to develop novel tools that address fundamental challenges surrounding interaction and cognition when integrating AI into programming. His work has been published at top-tier HCI venues such as CHI, UIST, IDC, and IUI, and his research in AI and education is among the most highly cited CHI papers of the past two years. Prior to his PhD, Majeed completed his PhD at the University of Maryland, where he worked with Prof. Jon Froehlich at the HCIL. During this time, he designed and built MakerWear–a tangible, modular electronic toolkit that enables young children to create interactive wearables–which earned a Best Paper Award at CHI."BBL Speaker Series: “Making Data Strange in Nonprofit Organizations”
Date: Feb 6th, 2025 12:30 PM
Talk Title: Making Data Strange in Nonprofit Organizations
Speaker: Dr. Amy Voida, Associate professor and founding faculty in the Department of Information Science, University of Colorado Boulder
Location: HBK 2105 and Zoom
Abstract: "This is a talk with an alter ego. As a research talk, I explore the myriad ways in which the use of data in nonprofit organizations disrupts our expectations of what it means to design organizational information systems — defamiliarizing data or… making data strange. From needing to address the coerciveness of the nonprofit database’s primary key to requiring new approaches for identifying the manipulative uses of data by ideologically polarized nonprofits, research about this sector serves as a critical case study of information systems at a state of enormous precarity and politicization. The research talk’s alter ego is a teaching talk in which I introduce defamiliarization, a construct that transcends subdisciplines and extends from one end of the design process to the other. Despite this impressive resume, defamiliarization is rarely taught in our curriculum, so I also take this opportunity to share seven strategies for using defamiliarization in your own work. I conclude by offering a glimpse of a new course I have designed to put defamiliarization center stage."
Bio: "Dr. Amy Voida is an associate professor and founding faculty in the Department of Information Science at the University of Colorado Boulder. She conducts empirical and design research in human–computer interaction and computer supported cooperative work, with a focus on philanthropic informatics—an interdisciplinary domain she pioneered exploring the role of information and communication technologies in supporting nonprofit and other work for the public good. Dr. Voida earned her Ph.D. in Human–Centered Computing from the Georgia Institute of Technology. She also holds an M.S. in Human–Computer Interaction from Georgia Tech and a B.A.E. in Elementary Education from Arizona State University. "BBL Speaker Series: “Steps Towards an Infrastructure for Scholarly Synthesis”
Date: Feb 13th, 2025 12:30 PM
Talk Title: Steps Towards an Infrastructure for Scholarly Synthesis
Speaker: Dr. Joel Chan, Assistant Professor; Assistant Director, PhD Information Studies; Associate Director, HCIL
Location: HBK 2105 and Zoom
Abstract: Sharing, reusing, and synthesizing knowledge is central to research progress. But these core functions are not well-supported by our formal scholarly publishing infrastructure: documents aren't really the right unit of analysis, so researchers resort to laborious "hacks" and workarounds to "mine" publications for what they need. Information scientists have proposed an alternative infrastructure based on the more appropriately granular model of a discourse graph of claims, and evidence, along with key rhetorical relationships between them. However, despite significant technical progress on standards and platforms, the predominant infrastructure remains stubbornly document-based. What can HCI do about this? Drawing from infrastructure studies, I diagnose a critical infrastructural bottleneck that HCI can help with: the lack of local systems that integrate discourse-centric models to augment synthesis work, from which an infrastructure for synthesis can be grown. In this talk, I'll describe what we can and should build in order to grow a discourse-centric synthesis infrastructure. Drawing on 3 years of research through design and field deployment in a distributed community of hypertext notebook users, I'll sketch out a design vision of a thriving ecosystem of researchers authoring local, shareable discourse graphs to improve synthesis work, enhance primary research and research training, and augment collaborative research. I'll discuss how this design vision -- and our empirical work -- contributes steps towards a new infrastructure for synthesis, and increases HCI's capacity to advance collective intelligence and solve infrastructure-level problems.
Bio: Dr. Chan's research and teaching explore systems that support creative knowledge work. He conceives of "systems" very broadly, from individual cognitive skills, interfaces, tools and practices, to collaborative and organizational dynamics and tools, collective intelligence and crowdsourcing, social computing, all the way to sociotechnical infrastructures within which knowledge work is done. Dr. Chan is also broadly interested in creative work across many domains, although he spends most of his time considering the disciplines of design and scientific discovery. His long-term vision is to help create a future where any person or community can design the future(s) they want to live in. Before coming to the College of Information Studies, Dr. Chan was a Postdoctoral Research Fellow and Project Scientist in the Human-Computer Interaction Institute (HCII) at Carnegie Mellon University. Dr. Chan received his Ph.D. in Cognitive Psychology at the University of Pittsburgh.
BBL Speaker Series: "HCAI Research in Industry"
Date: Feb 20th, 2025 12:30 PM
Talk Title: HCAI Research in Industry
Speaker: Dr. Tiffany D. Do, Assistant Professor, Drexel University
Location: HBK 2105 and Zoom
Abstract: Tiffany Do, an Assistant Professor specializing in human-centered AI, will provide an in-depth overview of industry research, drawing on her experiences at Microsoft Research and Google Labs. This talk will explore the key distinctions between industry and academic research, offering students a comprehensive understanding of the objectives, methodologies, and opportunities unique to industry research. Attendees will gain practical insights into navigating and excelling in research careers beyond academia.
Bio: Dr. Tiffany D. Do is an Assistant Professor in Computer Science at Drexel University, specializing in Human-Centered AI, augmented reality (AR), virtual reality (VR), and virtual avatars. Her research focuses on the potential of AI to personalize experiences for individuals, placing a premium on their unique identities and perspectives. Previously, she conducted research at Microsoft Research and Google, where she focused on user experience (UX) and interactions with AI language applications, particularly large language models (LLMs) and virtual agents.
BBL Speaker Series: Engineering Bodies and Subjectivity
Date: Feb 27th, 2025 12:30 PM
Talk Title: Engineering Bodies and Subjectivity
Speaker: Jun Nishida, Assistant Professor, Department of Computer Science and Immersive Media Design program, University of Maryland, College Park
Location: HBK 2105 and Zoom
Abstract: While today’s tools allow us to communicate effectively with others via video and text, they leave out other critical communication channels, such as physical skills and embodied knowledge. These bodily cues are important not only for face-to-face communication but even when communicating motor skills, subjective feelings, and emotions. Unfortunately, the current paradigm of communication is rooted only in symbolic and graphical communication, leaving no space to add these additional haptic and/or somatosensory modalities. This is precisely the research question I tackle: how can we also communicate our physical experience across users? In this talk, I introduce how I have engineered wearable devices that allow for sharing physical experiences across users, such as between a physician and a patient, including people with neuromuscular impairments and even children. These custom-built user interfaces include exoskeletons, virtual reality systems, and interactive devices based on electrical muscle stimulation. I then investigated how we can extend this concept to support interactive activities, such as product design, through the communication of one's bodily cues. Lastly, I discuss how we can optimize our subjectivity using the psychophysics approach, such as a sense of agency, when our bodies are modified, actuated, or shared with a computer or a human partner. I conclude my talk by discussing how we can further explore the possibilities enabled by a user interface that communicates more than audio-visual cues and the roadmap for using this approach in new territories, such as understanding how our bodies, perceptions, and somatic interactions contribute to the formation of human embodiment, subjectivity, and behavior.
Bio: Jun Nishida is an Assistant Professor in the Department of Computer Science and Immersive Media Design program at the University of Maryland, College Park, where he leads the Embodied Dynamics Laboratory (https://emd.cs.umd.edu/). Previously he was a postdoctoral fellow at the University of Chicago, advised by Prof. Pedro Lopes. He received his Ph.D. in Human Informatics from the University of Tsukuba, Japan in 2019. His research interests focus on developing interaction techniques and wearable interfaces where users can communicate their embodied experiences to support each other by means of wearable and human augmentation technologies, with applications in the fields of rehabilitation, education, and design. He has received an ACM UIST Best Paper Award, an ACM CHI Best Paper Honorable Mention Award, a Microsoft Research Asia Fellowship Award, and a Forbes 30 Under 30 Award, among others.
BBL Speaker Series: Safe(r) Digital Intimacy: Lessons for Internet Governance & Digital Safety
Date: Mar 6th, 2025 12:30 PM
Talk Title: Safe(r) Digital Intimacy: Lessons for Internet Governance & Digital Safety
Speaker: Dr. Elissa M. Redmiles, Clare Boothe Luce Assistant Professor, Computer Science Department, Georgetown University; Faculty Associate, Berkman Klein Center for Internet & Society, Harvard University.
Location: HBK 2105 and Zoom
Abstract: The creators of sexual content face a constellation of unique online risks. In this talk I will review findings from over half a decade of research I've conducted in Europe and the US on the use cases, threat models, and protections needed for intimate content and interactions. We will start by discussing what motivates the consensual sharing of intimate content in recreation ("sexting") and labor (particularly on OnlyFans, a platform focused on commercial sharing of intimate content). We will then turn to the threat of image-based sexual abuse, a form of sexual violence that encompasses the non-consensual creation and/or sharing of intimate content. We will discuss two forms of image-based sexual abuse: the non-consensual distribution of intimate content that was originally shared consensually and the rising use of AI to create intimate content without people's consent. The talk will conclude with a discussion of how these issues inform broader conversations around internet governance, digital discrimination, and safety-by-design for marginalized and vulnerable groups.
Bio: Dr. Elissa M. Redmiles is the Clare Boothe Luce Assistant Professor at Georgetown University in the Computer Science Department and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University. She was previously a faculty member at the Max Planck Institute for Software Systems and has additionally served as a consultant and researcher at multiple institutions, including Microsoft Research, the Fred Hutchinson Cancer Center, Meta, the World Bank, the Center for Democracy and Technology, and the Partnership on AI. Dr. Redmiles uses computational, economic, and social science methods to understand users' security, privacy, and online safety-related decision-making processes. She particularly focuses on designing systems that improve safety & equity for members of marginalized communities. Dr. Redmiles has presented her research at the White House, the European Commission, and the National Academies, and her work has been featured in venues such as the New York Times, Wall Street Journal, Scientific American, Rolling Stone, Wired, and Forbes. She is the recipient of the 2024 ACM SIGSAC Early Career Award for exceptional contributions to the field of computer security and privacy, and her research has received multiple paper recognitions at USENIX Security, ACM CCS, ACM CHI, ACM CSCW, and ACM EAAMO. She received her B.S., M.S., and Ph.D., all from the University of Maryland. Go Terps!
BBL Speaker Series: Trans Technologies
Date: Mar 13th, 2025 12:30 PM
Talk Title: Trans Technologies
Speaker: Oliver Haimson, Assistant Professor, University of Michigan School of Information
Location: HBK 2105 and Zoom
Abstract: In this talk, drawing from my new book Trans Technologies (MIT Press, 2025), I discuss how technology creates new possibilities for transgender people, and how trans experiences, in turn, create new possibilities for technology. Mainstream technologies often exclude or marginalize transgender users, but when trans creators take technology design into their own hands, transformative possibilities emerge. Through in-depth interviews with over 100 creators of trans technology—including apps, games, health resources, extended reality systems, and supplies designed to address challenges trans people face—I uncover what trans technology means and explore its possibilities, limitations, and future prospects. I examine the design processes that brought these technologies to life, the role of community in their creation, and how they empower trans individuals to create their own tools to navigate a world that often fails to meet trans needs. This work highlights the successes and limitations of current trans technologies, identifies gaps still to be addressed, and investigates how privilege, race, and access to resources shape which trans technologies are created, who benefits, and who may be left out. Finally, I chart new directions for design and innovation to drive meaningful social change, inviting us to rethink the relationship between technology and marginalized communities.
Bio: Oliver Haimson is an Assistant Professor at University of Michigan School of Information, author of Trans Technologies (MIT Press 2025), and a recipient of a National Science Foundation CAREER award. He conducts social computing research focused on envisioning and designing trans technologies, social media content moderation and marginalized populations, and changing identities online during life transitions.
BBL Speaker Series: Reading, Augmented
Date: Mar 27th, 2025 12:30 PM
Talk Title: Reading, Augmented
Speaker: Andrew Head, Assistant Professor, Computer Science, University of Pennsylvania
Location: HBK 2105 and Zoom
Abstract: Have you ever read a text and failed to get much out of it? Why did that happen? There is a good chance it is because you came to a text with different context than the author expected. In this talk, I offer a vision of texts where they are always augmented to provide the necessary context. These texts explain their complex jargon. They simplify their own dense passages. They provide indexes into their best passages. And they enliven the stuffiest notations. Then, I show this vision is close to reality. It is based on a series of novel interfaces my lab and collaborators have developed. Lab studies of these interfaces have shown they improve information acquisition and change the way readers navigate texts. Their design has even influenced production reading applications. Come to this talk to examine a most common intellectual activity—reading—from a new viewpoint.
Bio: Andrew Head is an assistant professor in computer science at the University of Pennsylvania. He is co-founder and co-lead of the Penn HCI research group in human-computer interaction. His group develops novel technologies for interactive reading and reasoning. He publishes in ACM CHI, UIST, and other top venues for HCI research. To learn more about his group's work, see his website: https://andrewhead.info.
BBL Speaker Series: Threat Modeling Reproductive Health Privacy
Date: Apr 3rd, 2025 12:30 PM
Talk Title: Threat Modeling Reproductive Health Privacy
Speaker: Dr. Nora McDonald, Assistant Professor, Department of Information Science and Technology, George Mason University
Location: HBK 2105 and Zoom
Abstract: In a post-Roe landscape, reproductive privacy has become increasingly complex and high-stakes. This talk draws on mixed-methods research with healthcare providers and people who can become pregnant to examine how both groups understand and respond to evolving privacy risks. My work with colleagues found that providers’ privacy threat models often overlook new legal, digital, and contextual risks. While many are thinking critically about patient safety, their models need updating. Meanwhile, patients—deeply aware of their risks—are increasingly taking extreme privacy measures but still rely on guidance from providers. I conclude by proposing a concept that I have been evolving over the years, privacy intermediaries, as a promising framework to support people navigating these urgent, evolving threats.
Bio: Dr. Nora McDonald is an Assistant Professor in the Department of Information Science and Technology at George Mason University. She holds a PhD in Information Science from Drexel University's College of Computing and Informatics, where she focused on digital privacy and vulnerability. Her research examines the development of safe and ethical technologies, focusing on the impacts of complex surveillance systems and legal ecosystems, as well as the emerging relationships between identities, shifting norms around privacy and surveillance, and the data collected by privacy-invasive social media algorithms. This work bridges studies on reproductive privacy, teens' privacy in relation to these algorithms, and broader privacy concerns in the digital age. Positioned at the intersection of HCI, social computing, and critical computing, her work is published in leading venues such as CHI, CSCW, TOCHI, PETS, and USENIX.
BBL Speaker Series: Advancing Digital Health: AI-Driven Interventions for Patient Care and Workflow Optimization
Date: Apr 10th, 2025 12:30 PM
Talk Title: Advancing Digital Health: AI-Driven Interventions for Patient Care and Workflow Optimization
Speaker: Dr. Matthew Louis Mauriello, Assistant Professor, Department of Computer & Information Sciences, University of Delaware
Location: HBK 2105 and Zoom
Abstract: As digital health technologies evolve, new possibilities for enhancing healthcare access and delivery are emerging. In this talk, I will present an overview of my research at the intersection of computer science and digital health, focusing on developing intelligent digital interventions. Specifically, I will discuss therapeutic chatbot systems for patient support, predictive tools for stress and burnout detection, and the integration of large language models for data processing and summarization. These technologies can potentially transform medical systems by enabling novel patient interactions and improving data acquisition while reducing administrative burdens. I will explore key challenges in designing and deploying these systems, including usability, ethical considerations, and the potential for clinical integration. By leveraging human-centered design principles and these emerging technologies, we can develop digital interventions that enhance healthcare efficiency and patient outcomes, ultimately shaping the future of intelligent health technologies.
Bio: Dr. Matthew Louis Mauriello is an Assistant Professor in the Department of Computer & Information Sciences at the University of Delaware, where he directs the Sensify Lab. His research lies at the intersection of human-computer interaction and ubiquitous computing, focusing on digital health, personal informatics, wearables, and AI-driven interventions. His work explores developing and evaluating intelligent systems for patient support, stress and burnout detection, and workflow optimization. Dr. Mauriello has an extensive background in interdisciplinary research, leveraging advances in machine learning, social computing, and information visualization to design and assess interactive health technologies. His research has been supported by the NSF, the Maggie E. Neumann Health Sciences Research Fund, and industry partners. He earned his Ph.D. in Computer Science from the University of Maryland’s Department of Computer Science. He then completed a postdoctoral fellowship at Stanford University’s School of Medicine, where he worked on pervasive well-being technologies. Dr. Mauriello is also an active mentor, educator, and advocate for responsible AI and human-centered design in computing.
BBL Speaker Series: Navigating Bias and Leveraging AI: Exploring the Dual Reality for Users with Disabilities
Date: Apr 17th, 2025 12:30 PM
Talk Title: Navigating Bias and Leveraging AI: Exploring the Dual Reality for Users with Disabilities
Speaker: Dr. Vinitha Gadiraju, Assistant Professor, Department of Computer Science, Wellesley College
Location: HBK 2105 and Zoom
Abstract: Generative AI holds immense potential to revolutionize how we work, communicate, and access information. But are we building a future that includes everyone? In this talk, we will delve into how large language models (LLMs) trained on real-world data can inadvertently reflect harmful societal biases, particularly toward people with disabilities and other marginalized communities. As we discuss biases, we will characterize the subtle yet harmful stereotypes people with disabilities have encountered that were reinforced by LLM-based chatbots, such as inspiration porn and able-bodied saviors. In contrast, we will also examine the creative and resourceful ways people with disabilities leverage these tools, how chatbots fit into their technological ecosystem, and their desires for the next iteration of generative AI tools. Finally, we will contemplate the role of chatbots in common and high-risk use cases in the context of previous foundational research and disability justice principles.
Bio: Dr. Vinitha Gadiraju is an Assistant Professor in the Department of Computer Science at Wellesley College. She has a Ph.D. in Computer Science from the University of Colorado Boulder, where she investigated and designed accessible, collaborative educational tools for visually impaired children and their social networks (supported by the National Science Foundation Graduate Research Fellowship). Dr. Gadiraju's lab at Wellesley College now focuses on studying how adults with disabilities interact and form relationships with Large Language Model-based chatbots and the harms and benefits that arise during these experiences. She is a 2024 Google Research Scholar recipient and publishes in leading HCI, AI, and accessibility research venues such as CHI, FAccT, and ASSETS.
BBL Speaker Series: Tool-making, Accessibility, and Interactive Data Experiences
Date: May 1st, 2025 12:30 PM
Talk Title: Tool-making, Accessibility, and Interactive Data Experiences
Speaker: Frank Elavsky, PhD candidate and Researcher, Human-Computer Interaction Institute, Carnegie Mellon University
Location: HBK 2105 and Zoom
Abstract: "Come and join me for a first-ever prototype of my (eventual) job talk! This talk presents practical and research advancements in making interactive data experiences more accessible through a suite of tools and frameworks designed to enhance both the usability and creation of accessible, interactive data experiences. Central to this work is the rethinking of accessibility, focusing not just on the functionality of representations and visualizations but on how the tools and methodologies used to build them can shape accessible outcomes. Frank's research introduces Chartability, a heuristic framework that enables practitioners, especially those with limited accessibility expertise, to evaluate and improve data visualizations across various disabilities. Complementing this, Data Navigator offers a dynamic system that allows designers to build accessible data navigation structures, supporting a variety of input modalities and assistive technologies to ensure inclusive data exploration. The concept of Softerware is introduced to aid tool designers in creating data representation systems that empower end-users with disabilities to personalize and customize their own experiences. Finally, the cross-feelter—a blind-centered data analysis hardware prototype—is presented, showcasing a tactile input device that significantly enhances how blind users explore complex relationships in linked data interfaces. Together, these contributions emphasize the importance of tools and toolmaking in creating accessible, inclusive, and customizable data interactions."
Bio: "Frank is a PhD candidate and researcher at the Human-Computer Interaction Institute at Carnegie Mellon University. His work explores the intersection of interactive data visualization, accessibility, and tooling as an intervention in the design process. Frank has collaborated with companies such as Apple's Human-Centered Machine Intelligence research group, Adobe, Microsoft, Visa, and Highcharts. Frank’s contributions focus on reimagining accessibility as an integral part of the design and tool-making process, enabling data analysts and designers to build interfaces that proactively empower people with disabilities. His work bridges the gap between technical innovation and disability-centered design, transforming traditional approaches to accessibility into dynamic social and technical interventions that enhance both data exploration and interaction."
Previous Events