omoruyi atekha

Engineer and Designer in Palo Alto

I am excited by deep problems whose solutions could re-contextualize how we see and interact with the world and with each other. I’m a graduate student at Stanford University, where I research topics in robotic perception and innovation strategy.

Interests

Meta-science, information design, political philosophy, product design, the intersection of culture and technology, computer vision, human-computer and human-robot interaction, music, digital communities, startups.

Things I like

Essays of Michel de Montaigne, “Seeing Is Forgetting the Name of the Thing One Sees” - Lawrence Weschler, “Insert Complicated Title Here” - Virgil Abloh, “The Persian Letters” - Montesquieu, “Purgatorio” - Dante, Other Internet, Nobells.blog, “In My Mind” - Pharrell Williams

Education

2022 — Now
MS at Stanford University
Stanford, CA

MS in Mechanical Engineering, with a focus on Computer Vision and Robotics

2018 — 2022
SB at MIT
Cambridge, MA

SB in Mechanical Engineering, with a minor in Art and Design and a concentration in Political Science.

Work Experience

2024 — Now
Graduate Researcher at Stanford University
Stanford, CA

Conducted research on robot trajectory prediction, integrating visual occupancy and semantic analysis with diffusion models to improve accuracy in pedestrian environments. Devised and implemented methods that account for pedestrians' visual fields and scene semantics, improving model precision. Advised by Dr. Monroe Kennedy in the ARMLab.

2023 — 2023
Research Science Intern at Amazon Robotics
Cambridge, MA

Designed a custom semantic Probability Hypothesis Density (PHD) filter for object tracking on an autonomous platform with bird's-eye-view detection. Modified the PHD algorithm to support semantic attributes from the computer vision pipeline. Analyzed the filter's performance against learned and naïve approaches.

2023 — 2023
Research Assistant at Stanford University
Palo Alto, CA

Worked in the Geometric Computation Group at Stanford. Created an object retrieval system that uses OpenAI's CLIP with multi-perspective renderings. Designed an algorithm for generalized object placement in 3D environments using vision-language models.
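
As a rough illustration of the retrieval idea (a sketch, not the Geometric Computation Group's code), the snippet below embeds several rendered views of each object with CLIP, averages them, and ranks objects against a text query by cosine similarity; the file names and the embed_object/retrieve helpers are illustrative assumptions.

```python
# Sketch: CLIP retrieval over multi-perspective renderings (hypothetical helpers).
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def embed_object(view_paths: list[str]) -> torch.Tensor:
    """Average the CLIP image embeddings of an object's rendered views."""
    views = torch.stack([preprocess(Image.open(p)) for p in view_paths]).to(device)
    with torch.no_grad():
        feats = model.encode_image(views)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return feats.mean(dim=0)

def retrieve(query: str, objects: dict[str, list[str]]) -> str:
    """Return the object whose averaged view embedding best matches the text query."""
    with torch.no_grad():
        text = model.encode_text(clip.tokenize([query]).to(device)).squeeze(0)
    text = text / text.norm()
    scores = {name: float(embed_object(paths) @ text) for name, paths in objects.items()}
    return max(scores, key=scores.get)

# Hypothetical usage: a few renderings of each object from different camera angles.
# print(retrieve("a red office chair", {"chair_01": ["chair_01_v0.png", "chair_01_v1.png"]}))
```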

2022 — 2023
Cambridge, MA

Worked closely with the executive team during their acquisition of Napster, and with the founder and CEO on product strategy and deployment for 2023-24. Created the initial draft of the coin design for the Napster token.

2021 — 2022
Stealth Startup
Cambridge, Massachusetts

Co-founded a media startup, where I served as product designer. Before my departure, the startup was accepted into Y Combinator and has since raised funding.

2020 — 2022
Research Engineer at MIT Computer Science and Artificial Intelligence Laboratory (CSAIL)
Cambridge, MA

Worked on Simultaneous Localization and Mapping (SLAM) in feature-redundant environments. Conducted a literature review on SLAM and factor graphs. Used an Intel RealSense D435i and a small-scale autonomous vehicle for testing. Modified the AprilSLAM ROS package to use IMU measurements as factors for map building.
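
As a loose sketch of the factor-graph formulation behind that SLAM work (not the modified package itself), the snippet below builds a small 2D pose graph in GTSAM, where IMU- or odometry-derived relative motions enter as between factors; the poses and noise values are made up for illustration.

```python
# Sketch: a tiny 2D pose graph in GTSAM. IMU/odometry-derived relative motion is
# approximated here as BetweenFactorPose2 constraints for brevity.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()

# Anchor the first pose with a prior.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))

# Relative motion between consecutive poses (stand-in for IMU/odometry factors).
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(1.0, 0.0, 0.1), odom_noise))

# Initial guesses for the optimizer.
initial = gtsam.Values()
initial.insert(0, gtsam.Pose2(0.0, 0.0, 0.0))
initial.insert(1, gtsam.Pose2(0.9, 0.1, 0.0))
initial.insert(2, gtsam.Pose2(1.8, 0.2, 0.1))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)
```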

Created a dataset for swarm-robot path planning and position initialization. Developed a machine learning system to predict the initial poses of swarm robotic exploration systems.

2018 — 2021
Research Engineer at MIT Media Lab
Cambridge, MA

Developed mechanical solutions for flexible silicone electronics and designed methods for gallium-based wiring. Designed injection molds for plastic and silicone parts. Prototyped connections between mechanical joints.

Programmed software to analyze an image's color space. Developed a prototyping tool for designers to generate color palettes, specifically for event-based photos.
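
One common way such a palette tool can work (a guess at the approach, not the Media Lab code) is to cluster an image's pixel colors and treat the cluster centers as the palette, as in this small sketch; the function name and input file are hypothetical.

```python
# Sketch: k-means palette extraction from an image's RGB color space.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def extract_palette(path: str, n_colors: int = 5) -> np.ndarray:
    """Return n_colors RGB triples that summarize the image's color space."""
    pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    return km.cluster_centers_.astype(np.uint8)  # each row is one palette color

print(extract_palette("event_photo.jpg"))  # hypothetical input file
```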

Formulated methods for monocular depth image capture and processing. Used computer vision and projection mapping to create panoramic depth photos.

2021 — 2021
Research Engineer at Toyota Research Institute
Cambridge, MA

Worked at the Toyota Research Institute as a Vehicle Engineering Intern. Designed a system for intersection abstraction for blind autonomous-vehicle passengers. Programmed and designed small-scale autonomous vehicles for testing using ROS.

Developed a cradle attachment for an ultrahaptic system to be used by visually impaired individuals. Wrote and carried out the experimental protocol and methodology. Co-authored a paper accepted to HRI 2023.

Writing

2023
Expanded Situational Awareness Without Vision: A Novel Haptic Interface for Use in Fully Autonomous Vehicles, Human-Robot Interaction

This work presents a novel ultrasonic haptic interface to improve nonvisual perception and situational awareness in fully autonomous vehicles. User study results (n=14) suggest comparable performance with the dynamic ultrasonic stimuli versus a control using static embossed stimuli. The utility of the ultrasonic interface coupled with gestural control is demonstrated with an autonomous small-scale robot vehicle on a simplified grid of intersections. These efforts support the application of ultrasonic haptics for improving nonvisual information access in autonomous transportation with strong implications for inclusive design, usability, and human-in-the-loop decision making.

2021
Robotic Exploration, Initialization, Optimization: Leveraging Multi-Robot Exploration Data to Determine the Initial Position

In multi-robot path planning, optimizing each robot’s position is important. Machine learning models, such as DARP, are used to optimize these paths and predict future paths given a set of starting positions and obstacles, ensuring complete coverage of the given maps. This paper explores the use of DARP to optimize the initialization and exploration of multi-robot systems and to output the best possible initial positions for a set of robots, given a map and its set of obstacles. A dataset, REIOset, of 4000 elements containing randomized maps, initial values, partition standard deviation, and co-visibility was created. A convolutional neural network (CNN) is then trained on the generated map and position data (image and parameter data) to output optimized initial positions.
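
For a sense of what such a model might look like (a hypothetical architecture, not the one trained on REIOset), the sketch below passes a single-channel occupancy grid through a small CNN and regresses normalized (x, y) initial positions for a fixed number of robots.

```python
# Sketch: a CNN that maps an occupancy-grid map to initial (x, y) robot positions.
import torch
import torch.nn as nn

class InitPositionCNN(nn.Module):
    def __init__(self, n_robots: int = 3, map_size: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        flat = 32 * (map_size // 4) ** 2
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 128), nn.ReLU(),
            nn.Linear(128, n_robots * 2),  # (x, y) per robot, normalized to [0, 1]
        )

    def forward(self, occupancy_grid: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(self.features(occupancy_grid)))

# Usage on a random 64x64 occupancy grid (batch of 1, 3 robots).
model = InitPositionCNN()
positions = model(torch.rand(1, 1, 64, 64)).view(-1, 3, 2)
print(positions.shape)  # torch.Size([1, 3, 2])
```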

Awards

2023
Interact Fellowship

Interact was founded in 2012. Every year since, new Fellows have been welcomed into Interact, which is voluntarily organized and led by former Fellows; 2023 marked Interact’s eleventh Fellowship cohort.

Over its history, Interacters have built friendships, co-founded ventures, and supported one another in achieving their missions. Fellows share an enthusiasm for making intentional impact and find joy in thoughtful conversations. As the community has grown, one thing has stayed constant: an earnest commitment to making the world better.

2022
Day One Robotics Fellowship from Amazon

The Amazon Robotics Day One Fellowship is a program established to support exceptionally talented students from diverse technical and multicultural backgrounds who are pursuing Master of Science degrees. The program was developed to support emerging leaders in science from backgrounds underrepresented in STEM, awarding scholarships, mentorship, and career opportunities.

Contact

Twitter