Block Transformer: Global-to-Local Language Modeling for Fast Inference
FLAME: Factuality-Aware Alignment for Large Language Models
MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT
Rethinking Optimization and Architecture for Tiny Language Models
LLM in a flash: Efficient Large Language Model Inference with Limited Memory
Cascade Speculative Drafting for Even Faster LLM Inference
Distributed Inference and Fine-tuning of Large Language Models Over The Internet
Gated Linear Attention Transformers with Hardware-Efficient Training
EE-LLM: Large-Scale Training and Inference of Early-Exit Large Language Models with 3D Parallelism
Agile-Quant: Activation-Guided Quantization for Faster Inference of LLMs on the Edge
SparQ Attention: Bandwidth-Efficient LLM Inference
Improving alignment of dialogue agents via targeted human judgements
Language Models are General-Purpose Interfaces
OPT: Open Pre-trained Transformer Language Models
CBQ: Cross-Block Quantization for Large Language Models
SCCA: Shifted Cross Chunk Attention for long contextual semantic expansion
LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding
Gemma: Open Models Based on Gemini Research and Technology
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction
Abseil Tip 234: Pass by Value, Pointer, or Reference
Abseil Tip 232: When to Use auto in Variable Declarations
Abseil Tip 231: Between Here and There – A Few Easily Overlooked Algorithms
SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration
Gemma 2: Improving Open Language Models at a Practical Size
The Llama 3 Herd of Models
Mooncake: A KVCache-centric Disaggregated Architecture for LLM Serving
Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality
Communication Compression for Tensor Parallel LLM Inference
Context Parallelism for Scalable Million-Token Inference
SimpleFSDP: Simpler Fully Sharded Data Parallel with torch.compile
FastAttention: Extend FlashAttention2 to NPUs and Low-resource GPUs
Large Concept Models: Language Modeling in a Sentence Representation Space
BatchLLM: Optimizing Large Batched LLM Inference with Global Prefix Sharing and Throughput-oriented Token Batching
ClusterKV: Manipulating LLM KV Cache in Semantic Space for Recallable Compression
Star Attention: Efficient LLM Inference over Long Sequences
Efficient LLM Inference with I/O-Aware Partial KV Cache Recomputation
SparseInfer: Training-free Prediction of Activation Sparsity for Fast LLM Inference
SageAttention2 Technical Report: Accurate 4 Bit Attention for Plug-and-play Inference Acceleration
Byte Latent Transformer: Patches Scale Better Than Tokens
Memory Layers at Scale
Efficient Memory Management for Large Language Model Serving with PagedAttention
Orca: Progressive Learning from Complex Explanation Traces of GPT-4
GSPMD: General and Scalable Parallelization for ML Computation Graphs
GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding
Abseil Tip 229: Ranked Overloads for Template Metaprogramming
Abseil Tip 227: Be Careful with Empty Containers and Unsigned Integer Arithmetic
Abseil Tip 224: Avoid vector.at()
Abseil Tip 197: Reader Locks Should Be Rare
FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision
Infinite-LLM: Efficient LLM Service for Long Context with DistAttention and Distributed KVCache
Fast Inference of Mixture-of-Experts Language Models with Offloading
SuperServe: Fine-Grained Inference Serving for Unpredictable Workloads
FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning
FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
Abseil Tip 3: String Concatenation and operator+ vs. StrCat()
Abseil Tip 218: Designing Extension Points with FTADLE
Abseil Tip 215: Stringifying Custom Types with AbslStringify()
Abseil Tip 198: Tag Types
Abseil Tip 18: String Formatting with Substitute
Abseil Tip 124: absl::StrFormat()
Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models
MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts
FFSplit: Split Feed-Forward Network For Optimizing Accuracy-Efficiency Trade-off in Language Model Inference
FlightLLM: Efficient Large Language Model Inference with a Complete Mapping Flow on FPGAs
Fast and Effective Weight Update for Pruned Large Language Models
Inference without Interference: Disaggregate LLM Inference for Mixed Downstream Workloads
Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads
DistServe: Disaggregating Prefill and Decoding for Goodput-optimized Large Language Model Serving
DeepSpeed-FastGen: High-throughput Text Generation for LLMs via MII and DeepSpeed-Inference
Inferflow: An Efficient and Highly Configurable Inference Engine for Large Language Models
Abseil Tip 188: Be Careful When Using Smart Pointers as Function Parameters
Originally published December 10, 2020, as Tip of the Week #188
Abseil Tip 187: std::unique_ptr Must Be Moved
Originally published November 5, 2020, as Tip of the Week #187
Abseil Tip 186: Prefer to Put Functions in the Unnamed Namespace
Originally published November 5, 2020, as Tip of the Week #186