attention (2)

[Paper Review] Prompt-to-Prompt Image Editing with Cross Attention Control
https://arxiv.org/abs/2208.01626
"Recent large-scale text-driven synthesis models have attracted much attention thanks to their remarkable capabilities of generating highly diverse images that follow given text prompts. Such text-based synthesis methods are particularly appealing to humans..." (arxiv.org)
Existing LLI (large-scale language-image) mo.. (2025. 2. 13.)

[Paper Review] 📌 Attention Is All You Need (aka. Transformer)
At last, it has arrived: the Transformer! 🥁 Ba-dum-tss!
https://arxiv.org/abs/1706.03762
"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new..." (arxiv.org)
The motivation behind the "Attention Is All You Need" paper was.. (2025. 2. 11.)
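Since both posts under this tag revolve around the attention mechanism, here is a minimal sketch of the scaled dot-product attention that "Attention Is All You Need" introduces, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. The function name, the NumPy setup, and the toy shapes are illustrative choices for this sketch, not taken from the posts themselves.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V (Vaswani et al., 2017)."""
    d_k = Q.shape[-1]
    # Similarity scores between queries and keys, scaled by sqrt(d_k)
    # so the softmax does not saturate for large dimensions.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted average of the value vectors.
    return weights @ V

# Toy usage: 3 query positions, 4 key/value positions, model dimension 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 8)

The same weights matrix is what Prompt-to-Prompt manipulates: its cross-attention control edits images by reusing or swapping these attention maps between the original and the edited prompt.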