An AI-powered finance tutor that explains concepts through real-time conversations. Built to learn streaming, conversation memory, and token management before diving into RAG systems.
Frontend: https://finassist-ai.netlify.app/
Backend: Deployed on Heroku (PostgreSQL)
- Responses appear word-by-word (like ChatGPT)
- Server-Sent Events (SSE) for live streaming
- Proper buffering for smooth text rendering
- Multi-turn conversations with context
- PostgreSQL stores full chat history
- Session restoration on page refresh
- No login required (UUID-based sessions)
- Counts tokens per message (jtokkit library)
- Auto-truncates old messages when limit reached
- Prevents context window overflow
- Tracks token usage in database
- Specialized tutor for finance topics only
- Politely redirects off-topic questions
- Markdown formatting (bold, lists, tables)
- Concise 300-word responses
Frontend:
- React 18
- react-markdown (remark-gfm)
- Server-Sent Events
- LocalStorage sessions
- Deployed on Netlify
Backend:
- Spring Boot 3 (Java 17)
- Spring WebFlux (reactive HTTP)
- PostgreSQL + Spring Data JPA
- OpenAI API (gpt-4o-mini)
- jtokkit (token counting)
- Deployed on Heroku via GitHub Actions
Infrastructure:
- Docker Compose (local dev)
- GitHub Actions (CI/CD)
- Heroku Postgres
React Frontend (Netlify)
↓ SSE
Spring Boot API (Heroku)
↓
OpenAI API + PostgreSQL
Database Schema:
conversations (id, session_id, created_at)
messages (id, conversation_id, role, content, tokens, created_at)
- Java 17+
- Node.js 18+
- Docker (for PostgreSQL)
- OpenAI API key
```bash
# 1. Start PostgreSQL
docker-compose up -d

# 2. Set environment variables
export OPENAI_API_KEY=sk-your-key-here

# 3. Run Spring Boot
cd backend
./mvnw spring-boot:run
```

Backend runs on http://localhost:8080

```bash
# 4. Run the frontend (in a separate terminal)
cd frontend
npm install
npm start
```

Frontend runs on http://localhost:3000
Took 13 attempts to get right (learned a lot about Heroku quirks)
.github/workflows/deploy.yml:

```yaml
name: Deploy to Heroku

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: akhileshns/heroku-deploy@v3.12.14
        with:
          heroku_api_key: ${{ secrets.HEROKU_API_KEY }}
          heroku_app_name: "your-app-name"
          heroku_email: "your-email"
```

Environment Variables (Heroku):
- OPENAI_API_KEY
- DATABASE_URL (auto-set by Heroku Postgres)
- Connect GitHub repo to Netlify
- Build command: npm run build
- Publish directory: build
Built this to prepare for RAG systems. Key concepts mastered:
- Server-Sent Events protocol
- Buffering incomplete chunks
- Reactive programming with Flux
- Session management without auth
- Context window limitations
- Message history persistence
- Token counting (not just character count)
- Context truncation strategies
- Cost optimization
- Retry logic with exponential backoff
- Graceful degradation on network failures
- GitHub Actions workflows
- Heroku deployment (13 failed attempts taught me patience)
- Environment variable management
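One of the bullets above, retry logic with exponential backoff, fits in a few lines of plain Java. This is an illustrative sketch, not the project's actual code; the names `withRetry` and the delay values are made up for the example:

```java
import java.util.function.Supplier;

public class RetryDemo {
    // Retry a call, doubling the wait between attempts (exponential backoff).
    static <T> T withRetry(Supplier<T> call, int maxRetries, long baseDelayMs)
            throws InterruptedException {
        long delay = baseDelayMs;
        for (int attempt = 1; ; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                if (attempt >= maxRetries) throw e; // out of attempts: give up
                Thread.sleep(delay);
                delay *= 2; // back off: 1x, 2x, 4x, ...
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        int[] calls = {0};
        // Simulate a flaky API call that fails twice, then succeeds.
        String result = withRetry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient failure");
            return "ok";
        }, 5, 50);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

Real code would also cap the maximum delay and only retry on errors that are actually transient (timeouts, 429s), not on things like invalid API keys.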
Challenge: SSE chunks arriving incomplete
Solution: Buffer management - accumulate incoming chunks until the \n\n event delimiter, then parse complete events
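The buffering idea can be sketched in plain Java (a simplified stand-in for the project's actual parser): append each raw network chunk to a buffer, and only emit events once a complete \n\n-terminated frame has arrived.

```java
import java.util.ArrayList;
import java.util.List;

public class SseBuffer {
    private final StringBuilder buffer = new StringBuilder();

    // Feed a raw network chunk; returns whatever complete SSE events it finishes.
    public List<String> feed(String chunk) {
        buffer.append(chunk);
        List<String> events = new ArrayList<>();
        int sep;
        // An SSE event ends with a blank line ("\n\n"); anything after the
        // last separator is an incomplete event and stays in the buffer.
        while ((sep = buffer.indexOf("\n\n")) >= 0) {
            events.add(buffer.substring(0, sep));
            buffer.delete(0, sep + 2);
        }
        return events;
    }

    public static void main(String[] args) {
        SseBuffer b = new SseBuffer();
        System.out.println(b.feed("data: Hel"));          // [] -- frame not complete yet
        System.out.println(b.feed("lo\n\ndata: world"));  // [data: Hello]
        System.out.println(b.feed("\n\n"));               // [data: world]
    }
}
```

The same pattern works on either side of the pipe: the backend parses OpenAI's streamed chunks this way, and the frontend does the equivalent when reading the fetch stream.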
Challenge: Context window overflow (4096 tokens)
Solution: Token counting + automatic truncation of oldest messages
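The truncation strategy looks roughly like this. The project uses jtokkit for exact counts; `countTokens` below is a crude whitespace-based stand-in so the example stays self-contained, and `trimToFit` is an illustrative name, not the project's actual method:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class ContextTrimmer {
    // Stand-in for a real tokenizer (the project uses jtokkit); this just
    // approximates tokens as whitespace-separated words.
    static int countTokens(String text) {
        return text.isBlank() ? 0 : text.trim().split("\\s+").length;
    }

    // Drop the oldest messages until the history fits within maxTokens.
    static List<String> trimToFit(List<String> messages, int maxTokens) {
        Deque<String> kept = new ArrayDeque<>();
        int total = 0;
        // Walk newest-to-oldest so the most recent context is kept first.
        for (int i = messages.size() - 1; i >= 0; i--) {
            int t = countTokens(messages.get(i));
            if (total + t > maxTokens) break;
            kept.addFirst(messages.get(i));
            total += t;
        }
        return List.copyOf(kept);
    }

    public static void main(String[] args) {
        List<String> history = List.of(
            "what is a bond",
            "a bond is a loan you make to a company or government",
            "and a stock",
            "a stock is a share of ownership in a company");
        // With a tight budget, only the newest messages survive.
        System.out.println(trimToFit(history, 12));
    }
}
```

Counting tokens (not characters) matters because the 4096-token limit is enforced by the model's tokenizer, and token counts differ substantially from character or word counts.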
Challenge: 13 failed deployments to Heroku
Solution: Learned Procfile, buildpacks, PORT binding ($PORT not 8080)
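The PORT fix ultimately comes down to one line; in a Spring Boot application.properties it looks roughly like this (Heroku injects PORT at runtime):

```properties
# Bind to Heroku's dynamically assigned port; fall back to 8080 for local dev
server.port=${PORT:8080}
```

The Procfile then just runs the built jar as a `web` process; the exact jar path depends on the build config.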
This isn't production-ready finance advice software. It's a learning project where I:
- ✅ Learned real-time streaming (SSE)
- ✅ Built conversation memory from scratch
- ✅ Understood token management
- ✅ Practiced error handling
- ✅ Deployed full-stack app with CI/CD
- ✅ Prepared for RAG/vector search (Month 2)
Next: Building vector search from scratch + RAG systems
Q: Why finance-only?
A: Narrowing scope helps AI give better answers. General chatbot = shallow. Focused tutor = actually helpful.
Q: Why no authentication?
A: Learning project focused on AI integration, not auth systems. Sessions work fine for demo purposes.
Q: 13 deployment attempts??
A: Heroku's PORT binding, buildpack config, Procfile syntax... each failure taught me something.
Q: Why not use LangChain?
A: Built everything manually to understand how streaming, memory, and token management actually work.
MIT - Clone it, learn from it, build your own version
Built with guidance from Claude AI for architecture decisions and learning explanations.
Special thanks to:
- OpenAI for the API
- The 13 failed Heroku deployments (you taught me patience)
- Stack Overflow (as always)
Building in public and sharing my AI learning journey:
- Portfolio: @theishanpathak
- LinkedIn: Connect with me