Categories: View all (138)

[Notable] Is 3DGS (3D Gaussian Splatting) Differentiable?
▶ 3DGS uses an explicit representation, yet its rendering pipeline is designed to be differentiable, so the representation can be trained. ✅ Why is it differentiable even though it is an explicit representation? Explicit representations describe a 3D object directly, so one might expect them to be hard to differentiate. For example: a point cloud is just a set of 3D coordinates, which is hard to differentiate through; a mesh is built from vertices and faces, and in general is not easy to differentiate either. 3DGS, however, uses Gaussian splatting to make rendering differentiable.. 2025. 2. 17.

[Paper Review] Triplane Meets Gaussian Splatting: Fast and Generalizable Single-View 3D Reconstruction with Transformers
https://arxiv.org/abs/2312.09147 2025. 2. 17.

[Paper Review] Prompt-to-Prompt Image Editing with Cross Attention Control
https://arxiv.org/abs/2208.01626 Existing LLI (large-scale language-image) mo.. 2025. 2. 13.

[Paper Review] 📌 Attention Is All You Need (aka. Transformer)
At last, here it is: the Transformer! 🥁 Drumroll! https://arxiv.org/abs/1706.03762 What prompted the "Attention Is All You Need" paper was.. 2025. 2. 11.
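The 3DGS entry above claims that splatting makes rendering differentiable. As a toy illustration of that idea (my own 1-D sketch, not the actual 3DGS rasterizer), one can "render" a single Gaussian onto sample points, compute an MSE loss against a target image, and recover the Gaussian's mean by gradient descent:

```python
import numpy as np

def render(xs, mu, sigma, amp):
    """Splat one 1-D Gaussian onto sample points xs (a toy 'image')."""
    return amp * np.exp(-0.5 * ((xs - mu) / sigma) ** 2)

def loss_and_grad_mu(xs, mu, sigma, amp, target):
    """MSE loss against a target image, plus the analytic gradient w.r.t. mu.

    Because render() is a smooth function of mu, the chain rule gives
    d render / d mu = render * (xs - mu) / sigma**2 -- this smoothness is
    what makes splatting-style rendering trainable by gradient descent.
    """
    img = render(xs, mu, sigma, amp)
    residual = img - target
    loss = np.mean(residual ** 2)
    d_img_d_mu = img * (xs - mu) / sigma ** 2
    grad_mu = np.mean(2.0 * residual * d_img_d_mu)
    return loss, grad_mu

xs = np.linspace(-3.0, 3.0, 200)
target = render(xs, mu=0.7, sigma=0.5, amp=1.0)  # "ground-truth" image

mu = 0.0  # wrong initial position; gradient descent recovers mu = 0.7
for _ in range(300):
    _, g = loss_and_grad_mu(xs, mu, sigma=0.5, amp=1.0, target=target)
    mu -= 1.0 * g
```

A real 3DGS renderer differentiates through many anisotropic Gaussians, their opacities, and alpha compositing, but the principle is the same: every step of the rasterization is a smooth function of the Gaussian parameters.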
[Paper Review] Classifier-Free Diffusion Guidance
https://arxiv.org/abs/2207.12598 Introduction: this paper shows that, without the classifier used in the classifier guidance paper, c.. 2025. 2. 5.

[Notable] Low Temperature Samples
Low-temperature sampling is a technique for generative models that raises sample quality while reducing diversity. It works mainly by adjusting the sampling "temperature" of the model's probability distribution.

1. What "temperature" means
Temperature is a hyperparameter that controls the sharpness (or uncertainty) of a probability distribution. Mathematically, it appears most often in the softmax function:

P(x_i) = exp(z_i / T) / Σ_j exp(z_j / T)

where: T = temperature; z_i = the logits (the model's predicted scores); P(x_i) = the final probability.

2. Effect of temperature
High temperature (T ≫ 1): the probability distribution becomes flat, so more diverse samples are generated, and the model makes uncertain choices more .. 2025. 2. 5.
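The temperature-scaled softmax described above can be sketched in a few lines (a minimal NumPy sketch; the variable names are mine):

```python
import numpy as np

def softmax_with_temperature(logits, T):
    """P(x_i) = exp(z_i / T) / sum_j exp(z_j / T)."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()            # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, T=0.5)  # sharp: low-temperature samples
warm = softmax_with_temperature(logits, T=1.0)  # plain softmax
hot = softmax_with_temperature(logits, T=5.0)   # flat: more diverse samples
```

Lowering T concentrates probability mass on the highest logit (higher fidelity, lower diversity), which is exactly the low-temperature-samples trade-off described above.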
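For the Classifier-Free Diffusion Guidance entry above, the core sampling trick can be written in one line. A minimal sketch, using the common ε-prediction form (the paper's own parameterization differs by a shift of the guidance weight, and the arrays here are hypothetical stand-ins for a denoiser's outputs):

```python
import numpy as np

def cfg_eps(eps_uncond, eps_cond, w):
    """Combine conditional and unconditional noise predictions.

    w = 0 -> purely unconditional, w = 1 -> purely conditional,
    w > 1 -> extrapolate past the conditional prediction (stronger guidance).
    """
    return eps_uncond + w * (eps_cond - eps_uncond)

# Hypothetical denoiser outputs for one latent.
eps_u = np.array([0.1, -0.3, 0.2])
eps_c = np.array([0.4, 0.0, -0.1])
guided = cfg_eps(eps_u, eps_c, w=3.0)
```

In practice both predictions come from a single network, with the conditioning dropped (e.g. replaced by a null token) for the unconditional pass; during training the condition is randomly dropped so one model learns both roles.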