GEO Academy

Certified GEO
Practitioner

The comprehensive GEO course by TrueSource AI. 6 modules. Practice-oriented. Tool-agnostic. Learn to make brands visible to ChatGPT, Perplexity and Gemini.

Start course →
6
Modules
~40
Lessons
31
Scientific Sources
30
Exam Questions

Curriculum

6 Modules — From Strategy to Execution

Each module builds on the previous one. From strategic context through technical implementation to an independent practice audit.

Certification

What You Get

📋

Knowledge Test

30 multiple-choice questions, 60 minutes, 70% passing score.

🔍

Practice Audit

Independent GEO audit of a real website with management roadmap.

🎖️

Certificate & Badge

Digital PDF certificate + verifiable LinkedIn badge.

Target Audiences

SEO Specialists

You already optimize for Google — now extend your skills to ChatGPT, Perplexity and Gemini. GEO is positioned to become one of the most important additions to your skillset.

Marketing Managers

You're responsible for your brand's digital visibility. This program gives you the framework to establish AI visibility as a strategic channel.

Agency Teams

Position your agency as a GEO expert. The Agency License (5 seats) enables training for your entire team.

CTOs & Tech Leads

Understand the technical requirements (JSON-LD, robots.txt, Edge Routing) and integrate GEO into your development pipeline.
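To give a sense of what "technical requirements" means in practice, here is a minimal JSON-LD snippet of the kind the course covers, embedded via a `<script type="application/ld+json">` tag. The brand name and URLs are placeholders, not real entities:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Brand GmbH",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-brand"
  ]
}
```

Structured data like this gives AI crawlers an unambiguous, machine-readable description of the entity behind a page.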

Frequently Asked Questions

Do I need programming skills?
No. The course is structured so that you can understand the concepts without a coding background. Code examples are explained and can be copied.
How long do I have access?
Access to the course modules is unlimited. We recommend completing it in 2-4 weeks.
What happens if I don't pass the exam?
You can retake the exam after a 7-day waiting period. Questions are randomly drawn from an extended pool.
What exactly do I get after completion?
You receive a digital completion certificate from TrueSource AI as a PDF. This confirms that you have successfully completed the course and passed the exam. It is a course completion certificate, not a government-recognized certification.
Is there a re-certification?
Yes, annually (€149). You receive an update module with the latest developments and a renewed badge.

31 Scientific Sources

This course is based on current computer science research, LLM architecture studies and established web standards. All arXiv sources were verified on March 14, 2026.

Part 1: Generative Engine Optimization & Search Paradigms (10 Papers)
  1. Aggarwal, P. et al. (Princeton University), 2023. "GEO: Generative Engine Optimization". arXiv:2311.09735. The foundation paper that coined the term GEO.
  2. Chen, M. et al., 2025. "Generative Engine Optimization: How to Dominate AI Search". arXiv:2509.08919
  3. Zhang, F. et al. (Pinterest AI Labs), 2026. "Generative Engine Optimization: A VLM and Agent Framework for Pinterest Acquisition Growth". arXiv:2602.02961
  4. Bagga, P. S. et al., 2025. "E-GEO: A Testbed for Generative Engine Optimization in E-Commerce". arXiv:2511.20867
  5. Zhuang, Y. et al., 2024. "Adversarial Search Engine Optimization for Large Language Models". arXiv:2406.18382
  6. Bardas, N. et al., 2025. "White Hat Search Engine Optimization using Large Language Models". arXiv:2502.07315
  7. Su, Y. et al. (Carnegie Mellon University), 2025. "What Generative Search Engines Like and How to Optimize Web Content Cooperatively". arXiv:2510.11438
  8. Anon., 2025. "Caption Injection for Optimization in Generative Search Engine". arXiv:2511.04080
  9. Del Tredici, M. et al., 2025. "C-SEO Bench: Does Conversational SEO Work?". arXiv:2506.11097
  10. Anon., 2025. "AI Answer Engine Citation Behavior: An Empirical Analysis of the GEO-16 Framework". arXiv:2509.10762
Part 2: Retrieval-Augmented Generation (RAG) (7 Papers)
  1. Lewis, P. et al. (Facebook AI), 2020. "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks". arXiv:2005.11401. The foundational paper on RAG architecture.
  2. Gao, Y. et al., 2023. "Retrieval-Augmented Generation for Large Language Models: A Survey". arXiv:2312.10997
  3. Guu, K. et al. (Google Research), 2020. "REALM: Retrieval-Augmented Language Model Pre-training". arXiv:2002.08909
  4. Nakano, R. et al. (OpenAI), 2021. "WebGPT: Browser-assisted question-answering with human feedback". arXiv:2112.09332
  5. Borgeaud, S. et al. (DeepMind), 2022. "Improving language models by retrieving from trillions of tokens" (RETRO). PMLR 162:2206-2240.
  6. Kim, H. et al., 2025. "Rethinking Retrieval-Augmented Generation for Medicine". arXiv:2511.06738
  7. Brown, A. et al., 2025. "A Systematic Literature Review of Retrieval-Augmented Generation". arXiv:2508.06401
Part 3: Semantic Web, Structured Data & JSON-LD (5 Standards)
  1. Howard, Jeremy (Answer.AI), 2024. "/llms.txt — a proposal to provide information to help LLMs use websites". llmstxt.org
  2. W3C Consortium, 2020. "JSON-LD 1.1: A JSON-based Serialization for Linked Data". W3C Recommendation.
  3. Guha, R. V., Brickley, D. (2016). "Schema.org: Evolution of Structured Data on the Web". Communications of the ACM, Vol 59, Issue 2.
  4. Zhu, Y. et al., 2025. "LLM-empowered Knowledge Graph Construction: A Survey". arXiv:2510.20345
  5. Ubl, Malte (Vercel CTO), 2025. "Proposal for inline LLM instructions in HTML based on llms.txt". Vercel Engineering Blog.
Part 4: AI Crawlers, Protocols & Middleware (5 Specs)
  1. Koster, M. et al. (2022). "Robots Exclusion Protocol". IETF RFC 9309.
  2. Tornese, L. et al., 2025. "Is Misinformation More Open? A Study of robots.txt Gatekeeping on the Web". arXiv:2510.10315. Finds that 60% of reputable news sites block AI crawlers, but only 9% of misinformation sites do.
  3. OpenAI Platform Documentation, 2023. "GPTBot and ChatGPT-User Web Crawler Identification".
  4. Anthropic Developer Documentation, 2024. "ClaudeBot Crawling Specifications".
  5. Cloudflare Developer Docs, 2024. "Declaring your AI bot policies & Bot Management".
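As a concrete illustration of the crawler policies documented in the sources above, a robots.txt fragment addressing AI crawlers by user agent might look like this. The specific allow/disallow choices and paths shown are illustrative examples, not recommendations:

```
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Disallow: /internal/
```

Per RFC 9309, the most specific matching `User-agent` group applies, so AI crawlers can be governed separately from traditional search bots.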
Part 5: LLM Cognitive Behavior & Trust (4 Papers)
  1. Ji, Z. et al., 2023. "Survey of Hallucination in Natural Language Generation". ACM Computing Surveys, Vol 55. DOI: 10.1145/3571730.
  2. Vaswani, A. et al. (Google Brain), 2017. "Attention Is All You Need". arXiv:1706.03762. The foundational paper of the Transformer architecture.
  3. Ouyang, L. et al. (OpenAI), 2022. "Training language models to follow instructions with human feedback" (InstructGPT). arXiv:2203.02155
  4. Schick, T. et al. (Meta AI), 2023. "Toolformer: Language Models Can Teach Themselves to Use Tools". arXiv:2302.04761

Ready to start?

Begin with Module 1 and become a Certified GEO Practitioner.

Start Module 1 →