Tesla
Consider before submitting an application: This position is expected to start around January 2025 and continue through the entire Winter term (i.e., through May 2025) or into Summer 2025 if available. We ask for a minimum of 12 weeks, full-time and on-site, for most internships.

International Students: If your work authorization is through CPT, please consult your school about your ability to work 40 hours per week before applying. You must be able to work 40 hours per week on-site. Many students will be limited to part-time during the academic year.

About the Team

In this role, you will be responsible for the internal workings of the AI inference stack and compiler that run neural networks in millions of Tesla vehicles and in Optimus. You will collaborate closely with AI engineers and hardware engineers to understand the full inference stack and design the compiler to extract maximum performance from our hardware. The inference stack development is purpose-driven: deployment and analysis of production models inform the team's direction, and the team's work immediately impacts performance and the ability to deploy increasingly complex models. With a cutting-edge, co-designed MLIR compiler and runtime architecture, and full control of the hardware, the compiler has access to traditionally unavailable features that can be leveraged via novel compilation approaches to generate higher-performance models.