Offline-first | AI-powered | Adaptive learning

From scattered notes to a personalized learning roadmap.

Geeky transforms user-captured content (text, links, files, images, and shared media) into bite-sized Shorts, maps them into a Knowledge Graph, and adapts recommendations after every interaction.

At a glance: Feature modules · Drift tables · Data access objects · Core pipeline stages

Current status

Flutter app is feature-rich; Python backend is in progress.

Frontend includes offline persistence, module architecture, premium gates, and seeded local data. Backend services for AI processing and RAG are scaffolded and actively being implemented.

Flutter + Riverpod 3 · Drift + SQLite · FastAPI + Redis · Firebase + Gemini

Why this exists

Geeky solves six practical learning problems

Content scatter

Learning material is spread across apps and formats; Geeky centralizes it into one structured workflow.

Information overload

Deduplication and scoring reduce repeated or low-value content so users focus on what matters.

Passive consumption

Shorts, quizzes, and interactions turn reading and watching into active learning.

No adaptive path

Recommendation scores rebalance relevance, capability, and novelty per learner.

Retention decay

Spaced repetition with FSRS (20–30% fewer reviews than SM-2) targets 90% retention.
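The FSRS claim above can be illustrated with its power-law forgetting curve. This is a hedged sketch, not the project's scheduler: the 9·S scaling follows the published FSRS curve, and at a 90% retention target the next interval works out to exactly the current stability.

```python
def retrievability(t_days: float, stability: float) -> float:
    # FSRS-style power forgetting curve: R(t) = (1 + t / (9 * S)) ** -1.
    # By construction, R equals 0.9 exactly when t == stability.
    return (1 + t_days / (9 * stability)) ** -1

def next_interval(stability: float, target_retention: float = 0.9) -> float:
    # Invert the curve: the interval at which R decays to the target.
    # I = 9 * S * (1 / R_target - 1); for R_target = 0.9 this is just S.
    return 9 * stability * (1 / target_retention - 1)
```

Lowering the target (say, 0.85) lengthens every interval, which is where the "fewer reviews than SM-2" trade-off comes from.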

Weak concept structure

A graph-based model connects prerequisites, related concepts, and deeper paths.

Product scope

Key capabilities already designed into the system

01

Multimedia ingestion

Accepts notes from text, links, files, images, audio/video workflows, and mobile sharing flows.

02

AI Shorts and source-grounded summaries

Processes content into concise Shorts with topic tags, difficulty estimates, and follow-up prompts.

03

Knowledge Graph navigation

Supports deeper, broader, next, and related exploration with dynamic edges and hierarchy-aware traversal.
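A minimal sketch of that four-direction traversal, under an assumed structure (hierarchy edges plus undirected "related" edges); the real graph uses dynamic edges and richer metadata.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy concept graph: directed broader->deeper edges plus
    undirected 'related' edges (names are illustrative)."""

    def __init__(self):
        self.children = defaultdict(set)  # concept -> deeper concepts
        self.parents = defaultdict(set)   # concept -> broader concepts
        self.related = defaultdict(set)   # concept -> lateral neighbors

    def add_hierarchy(self, broader, deeper):
        self.children[broader].add(deeper)
        self.parents[deeper].add(broader)

    def add_related(self, a, b):
        self.related[a].add(b)
        self.related[b].add(a)

    def explore(self, concept, direction):
        # 'deeper', 'broader', or 'related' from the current concept.
        edges = {"deeper": self.children,
                 "broader": self.parents,
                 "related": self.related}[direction]
        return sorted(edges[concept])
```

A "next" step would then pick one candidate from these sets using the recommendation score described later in this page.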

04

Adaptive learning engine

Uses a multi-factor score (relevance, capability, novelty) and interaction history to reorder recommendations.

05

RAG and semantic discovery

Hybrid retrieval, reranking, and grounded response generation over user-scoped content.

06

Offline-first experience

Local-first reads and queued interactions maintain usability while disconnected, then sync on reconnect.
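That queue-then-sync behavior can be sketched as an in-memory queue; this is a simplified illustration (the app itself persists locally via Drift), and all names are hypothetical.

```python
import time
from collections import deque

class InteractionQueue:
    """Queue interactions while offline; flush them in order on reconnect."""

    def __init__(self):
        self._pending = deque()

    def record(self, event_type, payload):
        # Append locally; nothing blocks on the network.
        self._pending.append({"type": event_type,
                              "payload": payload,
                              "ts": time.time()})

    def flush(self, send):
        """send(event) returns True on success. Stop at the first failure
        so event ordering is preserved for the next attempt."""
        sent = 0
        while self._pending:
            if not send(self._pending[0]):
                break
            self._pending.popleft()
            sent += 1
        return sent
```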

Pipeline

How a note becomes adaptive learning content

Extract

Normalize content from mixed media into a unified representation.

Chunk + dedup

Split by structure/semantics and remove exact, near, and semantic duplicates.

Embed + generate

Store embeddings, generate Shorts, tag topics, and run consistency-aware checks.

Update graph + roadmap

Refresh concept relationships, module context, and each learner's next-best content.
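The "Chunk + dedup" step's structure-first split can be sketched as a word-budget merge over paragraphs; the real pipeline is semantics-aware, so treat this as a simplified assumption.

```python
def chunk_text(text, max_words=80):
    """Split on paragraph breaks, then merge small paragraphs
    until a word budget is reached (structure-first chunking sketch)."""
    chunks, current, count = [], [], 0
    for para in filter(None, (p.strip() for p in text.split("\n\n"))):
        words = len(para.split())
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Each chunk then flows into the dedup, embedding, and generation stages described above.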

Adaptive signals

Recommendations respond to behavior and context

Core scoring uses relevance (40%), capability (30%), and novelty (30%), then adjusts with contextual signals documented in the project reports.
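Under the stated 40/30/30 weighting, the base score is a weighted sum before contextual adjustment. A sketch, with illustrative function and parameter names:

```python
def recommendation_score(relevance, capability, novelty,
                         w_rel=0.4, w_cap=0.3, w_nov=0.3):
    """Base score from the documented 40/30/30 split; contextual signals
    (time-of-day, session length, location) would then adjust this value."""
    return w_rel * relevance + w_cap * capability + w_nov * novelty
```

For example, an item with relevance 0.5, capability fit 0.8, and novelty 0.2 scores 0.50, ranking below a perfectly matched item at 1.0.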

Time-of-day

Chronological usage patterns influence what appears next.

Why it matters

The model can prioritize lighter or deeper content depending on session timing and user response rhythms.

Location (coarse)

Optional city/state context can boost local relevance.

Why it matters

Region-specific sources get a moderate priority lift while preserving coverage diversity.

Session duration

Short and long sessions can receive different content pacing.

Why it matters

The roadmap can favor concise items in brief sessions and deeper chains in longer study windows.

Interaction feedback

Read, skip, done, and quiz outcomes continually re-score items.

Why it matters

Personalization does not rely on a one-time profile; it updates after each interaction event.

Difficulty alignment

Capability matching helps avoid content that is too easy or too hard.

Why it matters

Bayesian Knowledge Tracing (BKT) models per-concept mastery, updating after quiz results and interaction speed to calibrate progression.
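The standard two-state BKT update behind that calibration can be sketched as follows; the slip, guess, and learn values here are illustrative defaults, not the project's tuned parameters.

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """One Bayesian Knowledge Tracing step: condition mastery on the
    observed answer, then apply the learning-transition probability."""
    if correct:
        # P(known | correct): a correct answer is evidence of mastery
        # unless it was a guess.
        cond = (p_know * (1 - p_slip)
                / (p_know * (1 - p_slip) + (1 - p_know) * p_guess))
    else:
        # P(known | incorrect): an error may still be a slip.
        cond = (p_know * p_slip
                / (p_know * p_slip + (1 - p_know) * (1 - p_guess)))
    # Chance the concept was learned during this opportunity.
    return cond + (1 - cond) * p_learn
```

Running this per concept after each quiz item yields the mastery estimates used for difficulty alignment.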

Deduplication rigor

Four-stage pipeline removes exact, near, semantic, and cross-modal duplicates.

Why it matters

Bloom filters provide a fast first screen; exact hashing, MinHash, and embedding similarity then catch exact, near, and semantic duplicates so repeated content never drains learner attention.
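The near-duplicate stage can be illustrated with MinHash over word shingles. This is a hedged sketch with illustrative parameters and helper names, not the project's actual pipeline.

```python
import hashlib

def shingles(text, k=3):
    """k-word shingles of a normalized text."""
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + k])
            for i in range(max(1, len(tokens) - k + 1))}

def minhash_signature(items, num_hashes=64):
    """For each of num_hashes salted hash functions, keep the minimum
    hash over the shingle set; equal minima estimate Jaccard overlap."""
    return [min(int(hashlib.md5(f"{i}:{item}".encode()).hexdigest(), 16)
                for item in items)
            for i in range(num_hashes)]

def estimated_jaccard(sig_a, sig_b):
    # Fraction of matching signature positions approximates Jaccard similarity.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Two notes that differ by a single word keep most shingles in common, so their signatures agree far more often than those of unrelated notes.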

Contextual bandits

Balances exploration (new topics) against exploitation (high-relevance content).

Why it matters

Continuous learning without pigeonholing: curiosity is rewarded while expertise deepens along high-value paths.
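A minimal epsilon-greedy sketch of that explore/exploit balance; a full contextual bandit would condition on the signals above, and all names and defaults here are illustrative.

```python
import random

class EpsilonGreedyBandit:
    """With probability epsilon, pick a random topic (explore);
    otherwise pick the topic with the best running reward (exploit)."""

    def __init__(self, arms, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {a: 0 for a in arms}
        self.values = {a: 0.0 for a in arms}

    def select(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.counts))
        return max(self.values, key=self.values.get)

    def update(self, arm, reward):
        # Incremental mean keeps a running reward estimate per arm.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

Rewards could be derived from read/done/quiz outcomes, so a topic the learner keeps skipping loses ground without ever being fully excluded.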

Diagram gallery

SVGs derived from report Mermaid architecture views

Comparisons and profiles

Important tables from the research and architecture docs

From ProposalNew.md: Recommendation paradigm evaluation

Recommendation approach | Mechanism | Explainability | Best fit
Path-based | Prerequisite chains | High | Structured curricula
Graph-based | Message passing on knowledge graph | Medium | Rich concept relationships
Collaborative filtering | User-behavior similarity | Medium | Implicit interaction signals
Hybrid (selected) | Path + scoring + contextual balance | High | Production personalization

From ARCHITECTURE.md: Task-specific RAG retrieval profiles

RAG profile | MMR lambda | Compression level | Output intent
Q&A | 0.8 | Aggressive | Precise answer with citations
Flashcard generation | 0.5 | Moderate | Coverage across sources
Summary | 0.6 | Light | Narrative synthesis
Mind map | 0.4 | Heavy | Maximum concept diversity
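The MMR lambda column trades relevance against diversity. A sketch of MMR re-ranking over precomputed similarities (the values in the test are toy numbers): higher lambda favors query relevance, lower lambda penalizes redundancy among the picks.

```python
def mmr_rank(query_sim, doc_sims, lam, k):
    """Maximal Marginal Relevance over precomputed similarities.
    query_sim[d] = sim(query, d); doc_sims[d1][d2] = sim(d1, d2).
    Score = lam * sim(q, d) - (1 - lam) * max sim(d, already selected)."""
    selected, candidates = [], set(query_sim)
    while candidates and len(selected) < k:
        def score(d):
            redundancy = max((doc_sims[d][s] for s in selected), default=0.0)
            return lam * query_sim[d] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With two near-duplicate passages and one off-topic one, a Q&A-style lambda keeps both duplicates near the top, while a mind-map-style lambda promotes the diverse passage first.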

Access model

Free and Premium are clearly separated

Capability | Free | Premium
Note ingestion and note feed | Full access | Full access
Knowledge Graph and RAG query | Locked | Full access
Quizzes and spaced repetition | Locked | Full access
Analytics depth | Basic | Full dashboard
Source limit | 3 sources | Unlimited
Store module downloads | 3 modules | Unlimited

Architecture snapshot

Clean boundaries between UI, data, and AI workloads

Frontend (Flutter)

Feature-first modules, Riverpod state management, GoRouter navigation, Drift local database, and Material 3 theming.

Backend (Python)

FastAPI services handle AI pipeline orchestration, retrieval workflows, recommendations, and business logic.

Infrastructure

Firestore for app config and curated content, Firebase App Check for attestation, Cloud Run for compute, Redis for jobs and caching, Gemini for generation and embeddings.

Who benefits

Designed for real-world self-directed learning

Students

Turn lectures, links, and notes into reviewable micro-content and revision flows.

Researchers

Organize dense topics into concept relationships, source-backed summaries, and navigable clusters.

Professionals

Maintain continuous upskilling from scattered daily content without losing context or recall.

Project team

Built with academic mentorship


Professor Samira Ghayekhloo

Academic Mentor

ASU Profile


Aakash Khepar

Builder of Geeky