Working with Models
Tower provides flexible options for running LLM inference: free local inference during development and serverless providers for production deployments.
Analyzing GitHub Issues with LLMs
This example demonstrates how to fetch data from an external source and feed it to a language model to extract insights. The pipeline:
- Fetches a GitHub issue and its comments using the GitHub API
- Formats the thread as a conversation
- Sends it to DeepSeek R1 for analysis and recommendations
- Writes the results to an Iceberg table
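The first two steps above can be sketched in plain Python. This is a minimal sketch, not Tower's actual implementation: it uses the public GitHub REST API endpoints for issues and comments, and the owner/repo/issue-number arguments are placeholders you would supply yourself. The downstream steps (sending the thread to DeepSeek R1 and writing to Iceberg) are covered by the inference options below.

```python
import json
import urllib.request

GITHUB_API = "https://api.github.com"


def fetch_issue_thread(owner: str, repo: str, number: int) -> tuple[dict, list]:
    """Fetch an issue and its comments from the GitHub REST API.

    Performs network calls; for private repos you would add an
    Authorization header with a GitHub token.
    """
    def get(url: str):
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    issue = get(f"{GITHUB_API}/repos/{owner}/{repo}/issues/{number}")
    comments = get(f"{GITHUB_API}/repos/{owner}/{repo}/issues/{number}/comments")
    return issue, comments


def format_thread(issue: dict, comments: list) -> str:
    """Flatten an issue and its comments into one conversation string
    suitable for sending to an LLM."""
    lines = [f"Issue: {issue['title']}", issue.get("body") or ""]
    for comment in comments:
        lines.append(f"{comment['user']['login']}: {comment['body']}")
    return "\n\n".join(lines)


if __name__ == "__main__":
    # Hypothetical repo coordinates, for illustration only.
    issue, comments = fetch_issue_thread("octocat", "hello-world", 1)
    print(format_thread(issue, comments))
```

The formatted string can then be passed as the user message in a chat-completion request to whichever inference backend is configured.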
Inference options:
- Local development: Use Ollama for free local inference on your GPU
- Production: Use serverless inference via Hugging Face Hub (e.g., Together.ai)
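One way to switch between these two options is to keep the backend details in a small config keyed on the environment, then hand them to any OpenAI-compatible client. This is an illustrative sketch, not Tower's API: it assumes Ollama's OpenAI-compatible endpoint on its default local port, Hugging Face's OpenAI-compatible router for serverless providers, and placeholder model names and environment variables.

```python
import os


def inference_config(environment: str) -> dict:
    """Return base URL, credential, and model for the given environment.

    "local" targets Ollama; anything else targets serverless inference
    via the Hugging Face router (which can delegate to providers such
    as Together.ai).
    """
    if environment == "local":
        return {
            # Ollama serves an OpenAI-compatible API on this port by default.
            "base_url": "http://localhost:11434/v1",
            "api_key": "ollama",  # Ollama ignores the key, but clients require one
            "model": "deepseek-r1:7b",  # illustrative local model tag
        }
    return {
        "base_url": "https://router.huggingface.co/v1",
        "api_key": os.environ.get("HF_TOKEN", ""),
        "model": "deepseek-ai/DeepSeek-R1",  # illustrative hosted model id
    }
```

With a config in hand, the same chat-completion call works in both environments; only `base_url`, `api_key`, and `model` change between development and production.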
Highlights: Ollama, DeepSeek R1, Hugging Face Hub, Together.ai, GitHub API, Iceberg