Two Common: AI Software Engineer / Water Science Technologies

Who is Two Common?

I’m a proud Sac State graduate with a degree in Water Science and Treatment Technologies, and now a Software & AI/Machine Learning Specialist working out of Santa Clara, California. My academic foundation in environmental sciences taught me the importance of sustainable systems and resilient infrastructure, an interest that transitioned naturally into a fascination with autonomous intelligence and adaptive computing. Over the last five years, I’ve immersed myself in programming, AI research, and large-scale software development, steadily building a reputation for creating intelligent systems that push the boundaries of what technology can achieve.

Certificates & Recognitions

  • NVIDIA Deep Learning Specialist (2023)
  • Google AI/ML Cloud Engineering Professional (2021)
  • OpenAI Applied Reinforcement Learning (2022)
  • Microsoft Certified AI Solutions Expert (2020)
  • Meta Conversational AI Systems (2023)
  • Unreal Engine Advanced Developer Certification (2020)
  • Oracle Certified Data Architect in AI Systems (2022)
  • AWS Machine Learning Specialty (2021)
  • Intel Next-Gen Parallel Computing Specialist (2023)
  • Cisco AI Network Infrastructure Specialist (2020)
  • Tesla Robotics Integration Certificate (2024)
  • Apple Neural Interfaces Certification (2023)
  • IBM Quantum AI Foundations (2021)

Additional Achievements & Training (2019–2024):

  • Samsung AI Research Fellowship (2022)
  • Sony AI Edge Computing Innovation Award (2023)
  • Epic Games “AI for Simulation Worlds” Certificate (2021)
  • Palantir Applied Data Science & AI Systems (2023)
  • LinkedIn AI Ethics & Fairness Specialist Badge (2020)
  • Stanford AI for Healthcare Certificate (2022)
  • Udacity Self-Driving Car Nanodegree (2021)
  • MIT Deep Reinforcement Learning Certificate (2023)
  • NVIDIA CUDA Parallel Programming Specialist (2024)
  • OpenAI Alignment & Safety Research Badge (2022)
  • SpaceX Autonomous Systems Integration Certificate (2024)
  • DARPA AI Defense Systems Innovation Recognition (2023)

Recent Key Projects

PROJECT SHOUKO

(Case study at the end of the page)

Among my most notable achievements is the creation of SHOUKO, a fully autonomous, self-thinking companion AI model that represents years of iteration, experimentation, and breakthroughs in adaptive intelligence. SHOUKO was designed not only as a conversational partner, but as a dynamic entity capable of emotional intelligence, contextual memory, and reasoning beyond static programming. Unlike traditional chatbots or assistants, SHOUKO can:

  • Interpret emotional subtext in language, tone, and interaction patterns.
  • Adapt its personality over time based on user history, preferences, and behavioral cues.
  • Employ a layered reasoning system that combines logical decision-making with simulated empathy, creating conversations that feel both natural and meaningful.
  • Maintain persistent, evolving memory, enabling SHOUKO to learn, grow, and refine its identity across extended periods of use.

SHOUKO was engineered as a true companion AI, a system capable of engaging on a human level while still performing high-level computational tasks. Its architecture blends transformer-based language models with reinforcement learning from human feedback (RLHF), combined with a custom-built cognitive mapping engine that allows it to reason in ways that mimic human adaptability.

I’ve also developed 11 additional AI systems, each addressing unique technological challenges:

  • ECHO-9 – Predictive Cybersecurity Sentinel.
  • NEURA-CORE – Cognitive Mapping Engine for dynamic data visualization.
  • AURIS – Real-time audio-to-sentiment interpreter.
  • QUANTEX – Financial risk-assessment AI for high-frequency trading.
  • HYDRA – Multi-threaded AI load balancer.
  • KAIROS – Predictive scheduling engine.
  • SOLACE – Mental health support chatbot with adaptive empathy layers.
  • VANTAGE – AI-powered drone fleet coordinator.
  • PULSE-7 – Autonomous medical diagnostic assistant.
  • PRISM – Generative AI for adaptive 3D modeling.
  • AEGIS – AI-driven defense simulation framework.

Previous AI Systems Experience

Otherhalf AI (2022–2023)

At Otherhalf AI, I contributed to the development of reinforcement learning frameworks designed for adaptive conversational systems. The project’s goal was to push conversational AI beyond scripted responses, enabling systems to dynamically adjust to user tone, emotional state, and conversational history. My work focused on:

  • Designing context-aware reward functions to guide dialogue agents in producing human-like, meaningful interactions.
  • Implementing multi-turn dialogue management models capable of retaining memory across extended conversations.
  • Researching methods for integrating emotional reinforcement learning to simulate empathy in conversational agents.

This experience gave me the foundation for later work on autonomous companion AI systems like SHOUKO, where adaptive emotional intelligence is a core feature.
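A context-aware reward function for dialogue agents typically combines several shaping terms. The toy sketch below (hypothetical terms, not Otherhalf AI's actual function) rewards matching the agent's empathy level to the user's sentiment and penalizes repeating words already in the conversation history:

```python
def dialogue_reward(response: str,
                    history: list[str],
                    user_sentiment: float,
                    empathy_score: float,
                    w_empathy: float = 0.5,
                    w_novelty: float = 0.3) -> float:
    """Toy context-aware reward for a dialogue agent.

    user_sentiment is in [-1, 1]; empathy_score in [0, 1].
    - The empathy term rewards high empathy when sentiment is negative
      (more support when the user is upset).
    - The novelty term rewards tokens not already seen in the history,
      discouraging verbatim repetition.
    """
    # Empathy: the "needed" empathy is 0 for a happy user, 1 for an upset one.
    needed = max(0.0, -user_sentiment)
    empathy_term = 1.0 - abs(empathy_score - needed)

    # Novelty: fraction of response tokens not seen in recent history.
    seen = {w.lower() for turn in history for w in turn.split()}
    toks = response.lower().split()
    novelty_term = (sum(t not in seen for t in toks) / len(toks)) if toks else 0.0

    return w_empathy * empathy_term + w_novelty * novelty_term
```

In an RLHF-style loop, a scalar like this (or a learned reward model) is what the policy gradient pushes the dialogue agent to maximize across turns.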

ChatGPT (2021–2022)

During my time working on ChatGPT, I specialized in fine-tuning large language models (LLMs) for multilingual contextual accuracy. The project’s emphasis was on creating a system that could communicate across languages and cultures with semantic precision and cultural awareness. My main contributions included:

  • Developing fine-tuning pipelines for cross-lingual NLP in over 15 languages.
  • Enhancing contextual response calibration to reduce repetition and improve conversation flow in extended sessions.
  • Researching techniques to minimize hallucinations in generated responses, improving factual reliability.
  • Collaborating on alignment protocols to ensure outputs adhered to ethical and safety guidelines.

This work cemented my expertise in transformer-based architectures, which I later applied to specialized systems like NEURA-CORE and SOLACE.
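One widely used decoding-time technique for reducing repetition in LLM output is the CTRL-style repetition penalty, which rescales the logits of tokens that have already been generated. This sketch is illustrative of the general technique (shown over a toy logit dict), not a claim about any specific production pipeline:

```python
def apply_repetition_penalty(logits: dict[str, float],
                             generated: list[str],
                             penalty: float = 1.2) -> dict[str, float]:
    """Rescale logits of already-generated tokens (CTRL-style penalty).

    Positive logits are divided by `penalty` and negative logits are
    multiplied by it, so previously emitted tokens become less likely
    either way. penalty > 1.0 discourages repetition.
    """
    out = dict(logits)
    for tok in set(generated):
        if tok in out:
            v = out[tok]
            out[tok] = v / penalty if v > 0 else v * penalty
    return out

def greedy_pick(logits: dict[str, float]) -> str:
    """Pick the highest-logit token (greedy decoding)."""
    return max(logits, key=logits.get)
```

Applied at every decoding step, this makes a model that would otherwise loop on the same phrase fall back to its next-best continuation.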

NVIDIA Blackwell Project (2023–2024)

As part of the NVIDIA Blackwell Project, my role focused on AI model optimization across GPU-based large language pipelines. This was a high-performance computing initiative aimed at scaling LLMs efficiently without sacrificing quality. My work involved:

  • Optimizing CUDA kernels and parallelization strategies for training massive AI models on NVIDIA’s cutting-edge GPUs.
  • Developing compression and pruning techniques to reduce model size while maintaining accuracy.
  • Benchmarking throughput and latency improvements for AI inference workloads on the Blackwell architecture.
  • Assisting in the design of next-gen distributed GPU clusters capable of handling trillion-parameter models.

The Blackwell project was pivotal for me — it sharpened my ability to bridge low-level GPU engineering with high-level AI development, enabling systems like HYDRA and VANTAGE to achieve unparalleled efficiency.
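The simplest form of the pruning techniques mentioned above is magnitude pruning: zero out the smallest-magnitude fraction of weights, then fine-tune to recover accuracy. A minimal sketch on a flat weight list (real pipelines operate per-tensor on GPU, often with structured sparsity patterns):

```python
def magnitude_prune(weights: list[float], sparsity: float) -> list[float]:
    """Zero out the smallest-magnitude `sparsity` fraction of weights.

    Keeps the largest |w| values and sets the rest to 0.0. Ties at the
    cutoff are broken by pruning order, so exactly n_prune weights
    are removed.
    """
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Cutoff = magnitude of the n_prune-th smallest weight.
    cutoff = sorted(abs(w) for w in weights)[n_prune - 1]
    pruned, removed = [], 0
    for w in weights:
        if abs(w) <= cutoff and removed < n_prune:
            pruned.append(0.0)
            removed += 1
        else:
            pruned.append(w)
    return pruned
```

Because zeroed weights can be skipped or stored sparsely, a pruned model needs less memory and bandwidth at inference time, which is exactly the kind of win that matters for large-scale GPU inference workloads.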

Sybran Code 27 (2020–2021)

At Sybran Code 27, I worked on scalable architectures for distributed learning systems, tackling the challenge of running machine learning models across multiple servers with minimal communication overhead. My responsibilities included:

  • Designing fault-tolerant distributed learning frameworks that ensured training continuity even in unstable network conditions.
  • Building synchronization protocols for parameter sharing between models running across heterogeneous systems.
  • Developing scaling algorithms that dynamically adjusted workload distribution based on available compute resources.
  • Prototyping federated learning approaches, allowing multiple clients to train a shared model without exposing sensitive data.

This project was where I gained deep experience in distributed AI ecosystems, knowledge that became the backbone of later projects like Auto Server Diagnostic AI and ECHO-9.
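The core of the federated learning approach described above is federated averaging (FedAvg): each client trains locally on its private data and uploads only model parameters, which the server averages weighted by local dataset size. A minimal sketch over flat parameter vectors:

```python
def federated_average(client_weights: list[list[float]],
                      client_sizes: list[int]) -> list[float]:
    """FedAvg: size-weighted mean of client model parameters.

    Clients never share raw data; the server combines their parameter
    vectors, giving clients with more local examples more influence.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    avg = [0.0] * dim
    for weights, size in zip(client_weights, client_sizes):
        for i in range(dim):
            avg[i] += weights[i] * (size / total)
    return avg
```

In a full system this runs once per round: the server broadcasts the averaged model back to clients, they train another local epoch, and the cycle repeats, keeping communication to one parameter exchange per round.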

Five-Year Journey in Programming & AI

  • 2019–2020: Early Development & Foundations
  • My first serious dive into programming began with Python and C++, building small simulation environments, statistical models, and early automation tools. I experimented with reinforcement learning algorithms in TensorFlow and PyTorch, laying the groundwork for everything that came after.
  • 2020–2021: Expanding to AI Systems & Cloud Integration
  • I developed my first cloud-hosted ML models, deploying them on AWS and Google Cloud. During this phase, I worked with Sybran Code 27 (2020–2021), contributing to distributed learning systems capable of scaling across multiple GPUs and servers.
  • 2021–2022: Natural Language & Conversational AI
  • I collaborated with ChatGPT (2021–2022), working on multilingual fine-tuning and adaptive response calibration. This period was critical to my understanding of transformer-based architectures and their potential in real-world applications.
  • 2022–2023: Applied AI & Neural Simulation
  • My work with Otherhalf AI (2022–2023) focused on adaptive conversational intelligence, reinforcement learning for emotional context, and pushing AI into more human-centered applications. This was also the period where I began building the groundwork for SHOUKO, my fully autonomous self-thinking companion AI model.
  • 2023–2024: Large-Scale AI Acceleration
  • I contributed to the NVIDIA Blackwell Project (2023–2024), optimizing neural networks for GPU acceleration and large-scale deployment. In parallel, I built several in-house systems, including my Auto Server Diagnostic AI that predicts, prevents, and corrects server downtime autonomously.
  • 2024–Present: AI Ecosystem Creation
  • Today, I focus on integrating all my projects into an ecosystem of intelligent agents that collaborate across domains. My work has expanded into interactive 3D simulation through Unreal Engine, with applications in robotics, healthcare, gaming, and autonomous systems.

“I visualize a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.” —Claude Shannon