SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model
Robust and Secure Code Watermarking for Large Language Models via ML/Crypto Codesign
BitsAI-CR: Automated Code Review via LLM in Practice
[Final Comment]: The variable name 'radious' is a typo. Please change it to 'radius'. [Review Summary]: Typo detected - recommend changing 'radious' to 'radius'.
Qwen2.5-1M Technical Report
Humanity's Last Exam
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding
JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation
Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution
DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence
DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search
Qwen2 Technical Report
Let the Expert Stick to His Last: Expert-Specialized Fine-Tuning for Sparse Architectural Large Language Models
DeepSeek-Prover: Advancing Theorem Proving in LLMs through Large-Scale Synthetic Data
DeepSeek-VL: Towards Real-World Vision-Language Understanding
How to Train Data-Efficient LLMs
DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models
DeepSeek-Coder: When the Large Language Model Meets Programming - The Rise of Code Intelligence
DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
DeepSeek LLM: Scaling Open-Source Language Models with Longtermism
Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models
Qwen Technical Report
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation
DeepSeek-V3 Technical Report
Qwen2.5 Technical Report
Fast State Restoration in LLM Serving with HCache
Compressed Context Memory For Online Language Model Interaction
A Hardware Evaluation Framework for Large Language Model Inference
TokenRing: An Efficient Parallelism Framework for Infinite-Context LLMs via Bidirectional Communication
DynamicKV: Task-Aware Adaptive KV Cache Compression for Long Context LLMs
Taipan: Efficient and Expressive State Space Language Models with Selective Attention
SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs
AIOS: LLM Agent Operating System
SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers
Block Transformer: Global-to-Local Language Modeling for Fast Inference
FLAME: Factuality-Aware Alignment for Large Language Models
MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT
Rethinking Optimization and Architecture for Tiny Language Models
LLM in a flash: Efficient Large Language Model Inference with Limited Memory
Cascade Speculative Drafting for Even Faster LLM Inference
Distributed Inference and Fine-tuning of Large Language Models Over The Internet
Gated Linear Attention Transformers with Hardware-Efficient Training
EE-LLM: Large-Scale Training and Inference of Early-Exit Large Language Models with 3D Parallelism
Agile-Quant: Activation-Guided Quantization for Faster Inference of LLMs on the Edge
SparQ Attention: Bandwidth-Efficient LLM Inference
Improving alignment of dialogue agents via targeted human judgements
Language Models are General-Purpose Interfaces
OPT: Open Pre-trained Transformer Language Models
CBQ: Cross-Block Quantization for Large Language Models
SCCA: Shifted Cross Chunk Attention for long contextual semantic expansion
LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding
Gemma: Open Models Based on Gemini Research and Technology
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction
Abseil Tip 234: Passing by Value, Pointer, or Reference
Below is a Korean translation of "Tip of the Week #234: Passing by Value, Pointer, or Reference."
Abseil Tip 232: When to Use auto in Variable Declarations
Below is a Korean translation of "Tip of the Week #232: When to Use auto in Variable Declarations."
Abseil Tip 231: Between Here and There – A Few Easily Overlooked Algorithms
Below is a Korean translation of "Tip of the Week #231: Between Here and There – A Few Easily Overlooked Algorithms."
SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration
Gemma 2: Improving Open Language Models at a Practical Size
The Llama 3 Herd of Models
Mooncake: A KVCache-centric Disaggregated Architecture for LLM Serving
Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality
Communication Compression for Tensor Parallel LLM Inference
Context Parallelism for Scalable Million-Token Inference
SimpleFSDP: Simpler Fully Sharded Data Parallel with torch.compile
FastAttention: Extend FlashAttention2 to NPUs and Low-resource GPUs
Large Concept Models: Language Modeling in a Sentence Representation Space
BatchLLM: Optimizing Large Batched LLM Inference with Global Prefix Sharing and Throughput-oriented Token Batching
ClusterKV: Manipulating LLM KV Cache in Semantic Space for Recallable Compression
Star Attention: Efficient LLM Inference over Long Sequences
Efficient LLM Inference with I/O-Aware Partial KV Cache Recomputation
SparseInfer: Training-free Prediction of Activation Sparsity for Fast LLM Inference
SageAttention2 Technical Report: Accurate 4 Bit Attention for Plug-and-play Inference Acceleration
Byte Latent Transformer: Patches Scale Better Than Tokens
Memory Layers at Scale
Efficient Memory Management for Large Language Model Serving with PagedAttention
Orca: Progressive Learning from Complex Explanation Traces of GPT-4
GSPMD: General and Scalable Parallelization for ML Computation Graphs
GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding
Abseil Tip 229: Ranked Overloads for Template Metaprogramming
Title: "Tip of the Week #229: Ranked Overloads for Template Metaprogramming"
Abseil Tip 227: Beware of Empty Containers and Unsigned Integer Arithmetic
Title: "Tip of the Week #227: Beware of Empty Containers and Unsigned Integer Arithmetic"
Abseil Tip 224: Avoid Using vector.at()
Title: "Tip of the Week #224: Avoid Using vector.at()"
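A minimal C++ sketch of the tip's point (function and variable names here are illustrative): when the caller can state the bounds precondition itself, an explicit check reads better than relying on the std::out_of_range exception that vector.at() throws.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

int ElementAt(const std::vector<int>& v, std::size_t i) {
  // v.at(i) would throw std::out_of_range on a bad index, which is rarely
  // caught and usually just terminates the program anyway. Making the
  // bounds check explicit keeps the precondition visible at the call site.
  assert(i < v.size());
  return v[i];
}
```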
Abseil Tip 197: Reader Locks Should Be Used Sparingly
Title: "Tip of the Week #197: Reader Locks Should Be Used Sparingly"
FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision
Infinite-LLM: Efficient LLM Service for Long Context with DistAttention and Distributed KVCache
Fast Inference of Mixture-of-Experts Language Models with Offloading
SuperServe: Fine-Grained Inference Serving for Unpredictable Workloads
FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning
FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
Abseil Tip 3: String Concatenation and operator+ vs. StrCat()
Title: "Tip of the Week #3: String Concatenation and operator+ vs. StrCat()"
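A small sketch of the comparison the tip draws, assuming absl::StrCat from Abseil's strings library (the function name is illustrative):

```cpp
#include <string>

#include "absl/strings/str_cat.h"

std::string Greeting(const std::string& name, int unread) {
  // Chained operator+ would build a series of temporary strings;
  // absl::StrCat sizes the result once and appends every piece in a
  // single pass, which is the efficiency argument of Tip #3.
  return absl::StrCat("Hello ", name, ", you have ", unread, " new messages");
}
```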
Abseil Tip 218: Designing Extension Points with FTADLE
Title: "Tip of the Week #218: Designing Extension Points with FTADLE"
Abseil Tip 215: Stringifying Custom Types with AbslStringify()
Title: "Tip of the Week #215: Stringifying Custom Types with AbslStringify()"
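A short sketch of the AbslStringify() extension point the tip describes, shown for a hypothetical Point type:

```cpp
#include <string>

#include "absl/strings/str_cat.h"
#include "absl/strings/str_format.h"

struct Point {
  int x = 0;
  int y = 0;

  // Defining this friend lets absl::StrCat, absl::StrFormat("%v", ...),
  // and related APIs stringify Point directly.
  template <typename Sink>
  friend void AbslStringify(Sink& sink, const Point& p) {
    absl::Format(&sink, "(%d, %d)", p.x, p.y);
  }
};

std::string Describe(const Point& p) { return absl::StrCat("point at ", p); }
```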
Abseil Tip 198: Tag Types
Below is a Korean translation of "Tip of the Week #198: Tag Types."
Abseil Tip 18: String Formatting with Substitute
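A minimal sketch of absl::Substitute as covered by the tip, using its positional $N placeholders (the function name is illustrative):

```cpp
#include <string>

#include "absl/strings/substitute.h"

std::string VisitSummary(const std::string& user, int visits, bool active) {
  // Substitute takes positional $0, $1, ... placeholders and converts
  // strings, numbers, and bools without printf-style format specifiers.
  return absl::Substitute("$0 has visited $1 times (active: $2)", user,
                          visits, active);
}
```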
Abseil Tip 124: absl::StrFormat()
Title: "Tip of the Week #124: absl::StrFormat()"
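A small sketch of absl::StrFormat from the tip, a type-safe printf-style formatter whose format string is checked against the argument types at compile time (the function name is illustrative):

```cpp
#include <string>

#include "absl/strings/str_format.h"

std::string PriceLine(const std::string& item, double price, int quantity) {
  // %s accepts std::string directly; mismatched conversion specifiers
  // are rejected at compile time rather than at runtime.
  return absl::StrFormat("%d x %s at $%.2f each", quantity, item, price);
}
```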