starship006/README.md

hi there! go check out my website at: starship006.github.io

Hi, I’m Cody! Human first, philosopher and computer scientist second.

I work at Redwood Research on AI security and safety. In the past, I've done value alignment research with Brad Knox and mechanistic interpretability research under Neel Nanda. I got my Bachelor's in Computer Science from UT Austin in fall 2024.

Nowadays, most of my time is spent thinking about the future and how we can develop safe, controlled Artificial General Intelligence. Besides all that, I'm working on being a better writer, basketball player, thinker, and friend.

Pinned

  1. backup_research (Public)

     Interpretability research into the self-repair phenomenon in Transformer models; accepted to ICML 2024 and to the SeT LLM @ ICLR 2024 workshop (oral)

     Jupyter Notebook · 5 stars · 1 fork

  2. ARENA-work (Public)

     Cody Rushing's repository of exercises and open-ended projects from the 2022 Virtual ARENA program

     Jupyter Notebook · 1 star

  3. AddieFoote/rl-final-project (Public)

     An empirical study of goal-conditioned RL

     Python · 2 stars