Reducing LLM Agent Pipeline Token Costs by 50%: A Practical Comparison of Summary Agent vs. Chunk Injection vs. Prompt Caching | DEV BAK Tech Blog