Commit 71c12e4 by Rohan Sinha, committed Apr 30, 2024 (parent 5210b61). Showing 3 changed files with 22 additions and 11 deletions.
File 1 (new file, +11 lines):

---
speaker: Student Speaker 1 -- Somrita Banerjee
affiliation: Stanford
website: "https://stanfordasl.github.io//people/somrita-banerjee/"
date: 2024-05-03T12:30:00-0000
location: Skilling Auditorium
location-url: "https://campus-map.stanford.edu/?srch=Skillaud"
title: "Learning-enabled Adaptation to Evolving Conditions for Robotics"
abstract: "With advancements in machine learning and artificial intelligence, a new generation of “learning-enabled” robots is emerging, which are better suited to operating autonomously in unstructured, uncertain, and unforgiving environments. To achieve these goals, robots must be able to adapt to evolving conditions that are different from those seen during training or expected during deployment. In this talk I will first talk about adapting to novel instantiations, i.e., different task instances with shared structure, through parameter adaptation. Such adaptation is done passively, by augmenting physics-based models with learned models, with our key contribution being that the interpretability of physical parameters is retained, allowing us to monitor adaptation. Second, I will talk about a framework for active adaptation where the model monitors its own performance and curates a diverse subset of uncertain inputs to be used for periodic fine-tuning of the model, improving performance over the full data lifecycle."
youtube-code: "TBD"
---
File 2 (new file, +11 lines):

---
speaker: Student Speaker 2 -- Elliot Weiss
affiliation: Stanford
website: "https://ddl.stanford.edu/people/elliot-weiss"
date: 2024-05-03T12:30:00-0000
location: Skilling Auditorium
location-url: "https://campus-map.stanford.edu/?srch=Skillaud"
title: "Wearing a VR Headset While Driving to Improve Vehicle Safety"
abstract: "Driver assistance systems hold the promise of improving safety on the road. We are particularly interested in developing new assistance systems that smoothly share control with the driver and testing them in a wide range of driving conditions. Given the central role of the driver in a shared control system, it is critical to elicit natural driving behavior during tests. This talk discusses the development of a flexible driving simulation platform that can be used for safe and immersive shared control testing. Our platform, known as \"Vehicle-in-the-Loop\", enables experiments on a real vehicle within a simulated traffic scenario viewed by the driver in a virtual reality headset. By implementing this platform around a four-wheel steer-by-wire vehicle, the driver can interact with shared control systems in a variety of test conditions – including low friction and highway speed driving – all on one vehicle platform and at one proving ground."
youtube-code: "TBD"
---
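Both added files use the Jekyll convention of YAML front matter between `---` delimiters. As a minimal sketch of how such a block can be read with only the Python standard library (the field names come from the files above; this flat `key: value` parser is an illustrative assumption, not the site's actual tooling):

```python
def parse_front_matter(text):
    """Parse a Jekyll-style front-matter block into a dict.

    Handles only flat `key: value` pairs, which is all these
    seminar files use; surrounding double quotes are stripped.
    """
    lines = text.strip().splitlines()
    if lines[0] != "---":
        raise ValueError("front matter must open with ---")
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter ends the block
            break
        # Split on the first colon only, so timestamps like
        # 2024-05-03T12:30:00-0000 stay intact in the value.
        key, _, value = line.partition(":")
        value = value.strip()
        if len(value) >= 2 and value[0] == value[-1] == '"':
            value = value[1:-1]
        fields[key.strip()] = value
    return fields

sample = """---
speaker: Student Speaker 1 -- Somrita Banerjee
affiliation: Stanford
date: 2024-05-03T12:30:00-0000
location: Skilling Auditorium
title: "Learning-enabled Adaptation to Evolving Conditions for Robotics"
youtube-code: "TBD"
---"""

meta = parse_front_matter(sample)
print(meta["title"])  # Learning-enabled Adaptation to Evolving Conditions for Robotics
```

Note that the second file's original abstract contained unescaped inner double quotes around "Vehicle-in-the-Loop", which a strict YAML parser would reject; escaping them (as above) keeps the field valid.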
File 3: this file was deleted (diff not shown).