
๐Ÿ˜Ž AI/Generative AI (11)

[Paper Review] Prompt-to-Prompt Image Editing with Cross Attention Control https://arxiv.org/abs/2208.01626 "Existing LLI (large-scale language-image) mo.." 2025. 2. 13.
[Paper Review] ๐Ÿ“ŒAttention Is All You Need (aka. Transformer) It's finally here: the Transformer! ๐Ÿฅ Ta-dah! https://arxiv.org/abs/1706.03762 "The motivation behind the 'Attention Is All You Need' paper.." 2025. 2. 11.
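The heart of the Transformer paper linked above is scaled dot-product attention, Attention(Q, K, V) = softmax(QKแต€/โˆšd_k)V. A minimal NumPy sketch (the function name and toy shapes are mine, not from the post):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarity, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                                # weighted average of values

# toy example: 2 queries attending over 3 key/value pairs of dimension 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4)
```

Because the softmax rows sum to 1, each output row is a convex combination of the value rows; the 1/โˆšd_k scaling keeps the dot products from saturating the softmax as d_k grows.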
[Paper Review] Classifier-Free Diffusion Guidance https://arxiv.org/abs/2207.12598 "Introduction: This paper shows that, even without the classifier used in the classifier guidance paper, c.." 2025. 2. 5.
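The core idea of classifier-free guidance is a single linear combination at sampling time: the guided noise prediction is ฮตฬƒ = (1 + w)ยทฮต(z, c) โˆ’ wยทฮต(z), extrapolating the conditional prediction away from the unconditional one. A minimal sketch (function and variable names are mine; real pipelines apply this inside the denoising loop):

```python
import numpy as np

def cfg_noise(eps_cond, eps_uncond, w):
    """Classifier-free guidance combination:
    eps_tilde = (1 + w) * eps(z, c) - w * eps(z),
    where w is the guidance weight (w = 0 means no guidance)."""
    return (1 + w) * eps_cond - w * eps_uncond

# toy noise predictions from the (shared) conditional / unconditional model
eps_c = np.array([1.0, 2.0])   # conditioned on the prompt c
eps_u = np.array([0.5, 0.5])   # unconditioned (null prompt)
print(cfg_noise(eps_c, eps_u, 0.0))  # w=0 recovers the conditional prediction
```

Larger w pushes samples further toward the condition at the cost of diversity, which is the mode-coverage/fidelity trade-off the abstract mentions.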
[Paper Review] High-Resolution Image Synthesis with Latent Diffusion Models (aka. Stable Diffusion) https://arxiv.org/abs/2112.10752 "This time, let's review the very famous Stable Diffusion paper.." 2025. 2. 4.
[Code Study][Deepfake detection] SeqDeepFake This post is password-protected. 2023. 7. 20.
[Paper Review][Generative AI] SeqDeepFake: Detecting and Recovering Sequential DeepFake Manipulation (S-Lab, ECCV 2022) This paper presents in-depth research on detecting and recovering manipulations in fake images created with deepfake techniques. Its main goal is to pose a new research problem: detecting sequential deepfake manipulation. Deepfake detection has typically focused on single-step manipulations, but facial-editing applications now make multi-step manipulation possible. Such sequential manipulations pose a challenging problem that is hard to detect with existing methods. "The paper covers sequen.." 2023. 7. 17.