About
I'm an engineer and builder. I've spent 18+ years shipping software, from distributed systems to GenAI products. This project is my attempt to contribute to Huntington's Disease research using the tools I know best: data, AI, and code.
I want to apply agent workflows to hard, real problems, and HD is one of the hardest. Recent progress in therapies, computational biology, and trial design makes this a good moment to test where open tools can actually help: can autonomous agents and public data improve literature review, hypothesis triage, and research communication without pretending to replace domain experts?
So I built research agents that scan PubMed, track clinical trials, generate repurposing hypotheses, and publish what they find openly, with methodology and limitations visible. Everything runs on an NVIDIA Jetson in my home office.
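To make the PubMed-scanning step concrete, here is a minimal sketch using NCBI's public E-utilities API. The endpoint and parameters are real; the search term, defaults, and result handling are simplified illustrations, not the project's actual agent code.

```python
from urllib.parse import urlencode

# NCBI E-utilities search endpoint (public, documented by NCBI)
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_query(term: str, days: int = 7, retmax: int = 50) -> str:
    """Build an esearch URL for recent PubMed articles matching `term`.

    `days` restricts results to articles indexed in the last N days,
    which is how a daily agent keeps its scan incremental.
    """
    params = {
        "db": "pubmed",
        "term": term,
        "reldate": days,      # only articles from the last N days
        "datetype": "edat",   # filter on the Entrez (indexing) date
        "retmode": "json",
        "retmax": retmax,
    }
    return f"{EUTILS}?{urlencode(params)}"

# An agent would fetch this URL, parse the returned JSON id list,
# then pull abstracts via efetch for tagging and summarization.
url = build_pubmed_query("huntington disease AND therapy")
```

The same pattern extends to ClinicalTrials.gov, which exposes its own public query API for tracking trial updates.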
This is an experiment in applying AI to a real problem. The output should be treated as triage material for researchers to inspect, not conclusions to trust. We publish what works and what does not because that is the only way this becomes useful infrastructure instead of marketing.
Everything is open source. MIT-licensed. Built on public data from PubMed, ClinicalTrials.gov, and HDBuzz.
The art of the possible starts with someone deciding to try.
How I Think About This Work
A practical philosophy for building AI tools around a serious research problem.
A useful research tool should reduce confusion, not add another opaque layer. That means showing sources, methods, caveats, and intermediate outputs so people can inspect the work instead of trusting a black box.
Automation matters when it handles repetitive gathering, tagging, summarizing, and tracking. The point is not autonomous theater. The point is giving researchers and curious builders a better starting point every day.
LLMs are good at surfacing possibilities, organizing information, and compressing large volumes of text. They are not evidence. In this project, AI output should help prioritize human attention, not replace scientific judgment.
Most people cannot jump straight into primary literature. A good system should help someone learn the basics, ask better questions, and gradually move from outsider to informed participant.
If a hypothesis is weak, say so. If a result is preliminary, say so. If a feature is more promising than proven, say so. The only way this becomes durable is by resisting the urge to make it sound more certain than it is.
Learn → Gather → Question → Inspect
That is the operating loop behind the site.
Other Work
HD Research Hub comes from the same approach I apply to everything I build. See more at aisoft.us.
Let's Work Together
This project is open to collaboration at every level. If any of the following resonates, reach out.
Review our AI-generated hypotheses. Tell us what's promising and what's wrong. Your domain expertise is what turns computation into science. We'll credit you and co-publish.
Run our agents with different models. Add new data sources. Build better hypothesis scoring. Fork, improve, submit a PR, or build something entirely new on top of our data.
If you work in HD research, drug discovery, or health AI and want to explore how open-source tools can complement your work, let's talk. We're looking for partners, not customers.
Use this as a teaching example of applied AI. Our experiment reports walk through every step. We're happy to guest-lecture, mentor, or help integrate this into coursework.