Collections by Dmitry Ryumin (@DmitryRyumin)

## Facial Expressions Recognition
https://huggingface.co/collections/DmitryRyumin/facial-expressions-recognition-65f22574e0724601636ddaf7

- **Audio-Visual Compound Expression Recognition Method based on Late Modality Fusion and Rule-based Decision** (2403.12687)
  - 📅 Conference: CVPRW, Jun 17-21, 2024 | Seattle WA, USA 🇺🇸
  - 📄 Paper: https://huggingface.co/papers/2403.12687
  - 📝 Repository: https://elenaryumina.github.io/AVCER
- **Revisiting Multi-modal Emotion Learning with Broad State Space Models and Probability-guidance Fusion** (2404.17858)
  - 📄 Paper: https://huggingface.co/papers/2404.17858
- **FER-YOLO-Mamba: Facial Expression Detection and Classification Based on Selective State Space** (2405.01828)
  - 📮 Post: https://huggingface.co/posts/DmitryRyumin/537495823373219
  - 📄 Paper: https://huggingface.co/papers/2405.01828
  - 📝 Repository: https://github.com/SwjtuMa/FER-YOLO-Mamba

## 🤗 Big Five Personality Traits
The latest AI technologies usher in a new era of Big Five personality assessment 🚀
https://huggingface.co/collections/DmitryRyumin/big-five-personality-traits-661fb545292ab3d12a5a4890

- **OCEANAI** (Space by ElenaRyumina): a tool to detect personality traits and automate HR processes
  - 📮 Post: https://huggingface.co/posts/DmitryRyumin/232666654143124
  - 📄 Paper: coming soon
  - 🤗 Demo: https://huggingface.co/spaces/ElenaRyumina/OCEANAI
  - 📝 Repository: https://github.com/aimclub/OCEANAI
- **PSYDIAL: Personality-based Synthetic Dialogue Generation using Large Language Models** (2404.00930)
  - 📮 Post: https://huggingface.co/posts/DmitryRyumin/321064754684797
  - 📅 Conference: LREC-COLING, May 20-25, 2024 | Torino, Italy 🇮🇹
  - 📄 Paper: https://huggingface.co/papers/2404.00930
- **Dynamic Generation of Personalities with Large Language Models** (2404.07084)
  - 📄 Paper: https://huggingface.co/papers/2404.07084
- **PersonaLLM: Investigating the Ability of Large Language Models to Express Personality Traits** (2305.02547)
  - 📮 Post: https://huggingface.co/posts/DmitryRyumin/665758617674402
  - 📅 Conference: NAACL, June 16–21, 2024 | Mexico City, Mexico 🇲🇽
  - 📄 Paper: https://huggingface.co/papers/2305.02547
  - 📝 Repository: https://github.com/hjian42/PersonaLLM

## 🎭 Avatars
The latest AI-powered technologies usher in a new era of realistic avatars! 🚀
https://huggingface.co/collections/DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

- **EMO: Emote Portrait Alive - Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions** (2402.17485)
  - 📮 Post: https://huggingface.co/posts/DmitryRyumin/146398971160140
  - 📄 Paper: https://huggingface.co/papers/2402.17485
  - 🌐 Github Page: https://humanaigc.github.io/emote-portrait-alive
  - 📝 Repository: https://github.com/HumanAIGC/EMO
- **VividTalk: One-Shot Audio-Driven Talking Head Generation Based on 3D Hybrid Prior** (2312.01841)
  - 📄 Paper: https://huggingface.co/papers/2312.01841
  - 🌐 Github Page: https://humanaigc.github.io/vivid-talk
  - 📝 Repository: https://github.com/HumanAIGC/VividTalk
  - 📺 Video: https://www.youtube.com/watch?v=lJVzt7JCe_4
- **MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model** (2311.16498)
  - 📮 Post: https://huggingface.co/posts/DmitryRyumin/578997477674932
  - 📅 Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA 🇺🇸
  - 🤗 Demo: https://huggingface.co/spaces/zcxu-eric/magicanimate
  - 📄 Paper: https://huggingface.co/papers/2311.16498
  - 🌐 Github Page: https://showlab.github.io/magicanimate/
  - 📝 Repository: https://github.com/magic-research/magic-animate
  - 🔥 Model 🤖: https://huggingface.co/zcxu-eric/MagicAnimate
- **GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians** (2312.02134)
  - 📮 Post: https://huggingface.co/posts/DmitryRyumin/635360328098616
  - 📅 Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA 🇺🇸
  - 📄 Paper: https://huggingface.co/papers/2312.02134
  - 🌐 Github Page: https://huliangxiao.github.io/GaussianAvatar
  - 📝 Repository: https://github.com/huliangxiao/GaussianAvatar
  - 📺 Video: https://www.youtube.com/watch?v=a4g8Z9nCF-k

## 🤖 LLM Spaces
A collection of applications demonstrating large language models (LLMs) 🚀
https://huggingface.co/collections/DmitryRyumin/llm-spaces-6616f00163b8a8054f171c48

- **C4AI Command R Plus** (Space by CohereForAI)
  - 🔥 Model 🤖: https://huggingface.co/CohereForAI/c4ai-command-r-plus
- **Qwen1.5 72B Chat** (Space by Qwen)
  - 🔥 Model 🤖: https://huggingface.co/Qwen/Qwen1.5-72B-Chat
- **Qwen1.5 32B Chat** (Space by Qwen)
  - 🔥 Model 🤖: https://huggingface.co/Qwen/Qwen1.5-32B-Chat
- **StarChat2 Demo** (Space by HuggingFaceH4)
  - 🔥 Model 🤖: https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1

## 🔊 Speech Enhancement
Unlocking a new era in Speech Enhancement, powered by the latest AI technologies, for superior audio quality improvements! 🚀
https://huggingface.co/collections/DmitryRyumin/speech-enhancement-65de31e1b6d9a040c151702e

- **KS-Net: Multi-band joint speech restoration and enhancement network for 2024 ICASSP SSI Challenge** (2402.01808)
  - 📅 Conference: ICASSP, 14-19 April 2024 | Seoul, Korea 🇰🇷
  - 📄 Paper: https://huggingface.co/papers/2402.01808
- **Unsupervised speech enhancement with diffusion-based generative models** (2309.10450)
  - 📅 Conference: ICASSP, 14-19 April 2024 | Seoul, Korea 🇰🇷
  - 📄 Paper: https://huggingface.co/papers/2309.10450
  - 📝 Repository: https://github.com/joanne-b-nortier/UDiffSE
- **Diffusion-based speech enhancement with a weighted generative-supervised learning loss** (2309.10457)
  - 📅 Conference: ICASSP, 14-19 April 2024 | Seoul, Korea 🇰🇷
  - 📄 Paper: https://huggingface.co/papers/2309.10457
- **Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain** (2203.17004)
  - 📅 Conference: INTERSPEECH, 18-22 September 2022 | Incheon, Korea 🇰🇷
  - 📄 Paper: https://huggingface.co/papers/2203.17004
  - 🌐 Web Page: https://www.inf.uni-hamburg.de/en/inst/ab/sp/publications/sgmse
  - 📝 Repository: https://github.com/sp-uhh/sgmse

## 🖼️ Image Enhancement
Embrace the future of Image Enhancement with the latest AI-powered technologies! 🚀
https://huggingface.co/collections/DmitryRyumin/image-enhancement-65ee1cd2fe1c0c877ae55d28

- **CAMixerSR: Only Details Need More "Attention"** (2402.19289)
  - 📮 Post: https://huggingface.co/posts/DmitryRyumin/818418428056695
  - 📅 Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA 🇺🇸
  - 📄 Paper: https://huggingface.co/papers/2402.19289
  - 📝 Repository: https://github.com/icandle/CAMixerSR

## Posts

### 🚀👕🌟 New Research Alert - SIGGRAPH 2024 (Avatars Collection)! 🌟👚🚀
- 📄 Title: LayGA: Layered Gaussian Avatars for Animatable Clothing Transfer 🔍
- 📝 Description: LayGA is a novel method for animatable clothing transfer that separates the body and clothing into two layers for improved photorealism and accurate clothing tracking, outperforming existing methods.
- 👥 Authors: Siyou Lin, Zhe Li, Zhaoqi Su, Zerong Zheng, Hongwen Zhang, and Yebin Liu
- 📅 Conference: SIGGRAPH, 28 Jul – 1 Aug, 2024 | Denver CO, USA 🇺🇸
- 📄 Paper: https://huggingface.co/papers/2405.07319
- 🌐 Github Page: https://jsnln.github.io/layga/index.html
- 📚 More Papers: more cutting-edge research presented at other conferences in https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin
- 🚀 Added to the Avatars Collection: https://huggingface.co/collections/DmitryRyumin/avatars-65df37cdf81fec13d4dbac36
- 🔍 Keywords: #LayGA #AnimatableClothingTransfer #VirtualTryOn #AvatarTechnology #SIGGRAPH2024 #ComputerGraphics #DeepLearning #ComputerVision #Innovation

### 🚀🎭🌟 New Research Alert - AniTalker (Avatars Collection)! 🌟🎭🚀
- 📄 Title: AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding 🔍
- 📝 Description: AniTalker is a new framework that transforms a single static portrait and a single input audio file into animated, talking videos with natural, fluid movements.
- 👥 Authors: Tao Liu, Feilong Chen, Shuai Fan, @cpdu, Qi Chen, Xie Chen, and Kai Yu
- 📄 Paper: https://huggingface.co/papers/2405.03121
- 🌐 Github Page: https://x-lance.github.io/AniTalker
- 📝 Repository: https://github.com/X-LANCE/AniTalker
- 📚 More Papers: more cutting-edge research presented at other conferences in https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin
- 🚀 Added to the Avatars Collection: https://huggingface.co/collections/DmitryRyumin/avatars-65df37cdf81fec13d4dbac36
- 🔍 Keywords: #AniTalker #FacialAnimation #DynamicAvatars #FaceSynthesis #TalkingFaces #DiffusionModel #ComputerGraphics #DeepLearning #ComputerVision #Innovation
šŸŒŸšŸŽ­šŸš€\nšŸ“„ Title: AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding šŸ”\n\nšŸ“ Description: AniTalker is a new framework that transforms a single static portrait and a single input audio file into animated, talking videos with natural, fluid movements.\n\nšŸ‘„ Authors: Tao Liu, Feilong Chen, Shuai Fan, @cpdu, Qi Chen, Xie Chen, and Kai Yu\n\nšŸ“„ Paper: https://huggingface.co/papers/2405.03121\n\n🌐 Github Page: https://x-lance.github.io/AniTalker\nšŸ“ Repository: https://github.com/X-LANCE/AniTalker\n\nšŸ“š More Papers: more cutting-edge research presented at other conferences in the https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin\n\nšŸš€ Added to the Avatars Collection: https://huggingface.co/collections/DmitryRyumin/avatars-65df37cdf81fec13d4dbac36\n\nšŸ” Keywords: #AniTalker #FacialAnimation #DynamicAvatars #FaceSynthesis #TalkingFaces #DiffusionModel #ComputerGraphics #DeepLearning #ComputerVision #Innovation","author":{"avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg","fullname":"Dmitry 
Ryumin","name":"DmitryRyumin","type":"user","isPro":false,"isHf":false,"isFollowing":false},"attachments":[{"type":"image","url":"https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/3LR0f0SVt8RVA-rd9-AZ8.gif"},{"type":"image","url":"https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/31PhXMjqfZAI75JC4-CIJ.gif"},{"type":"image","url":"https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/eOZZGgvft_ok_3DIJIc2R.png"},{"type":"image","url":"https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/hEVYcLX3kaUTGzgFZbwGY.png"},{"type":"image","url":"https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/jdpToUZDPw-TB_ZgXOfkb.png"},{"type":"image","url":"https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/VhpBsCn75r6e6X9Uv9hKa.png"},{"type":"image","url":"https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/IUNuc9JZlsdKXUQgsi95A.png"},{"type":"image","url":"https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/1OmED4fbAod-gxHJUbKsm.png"},{"type":"image","url":"https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/sXt_3WETdPU5vV1jmTrji.png"},{"type":"image","url":"https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/Mr1vDFwGSiDNCCVOgWTPv.png"},{"type":"image","url":"https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/44SwVGgP2P_yq2U2QJ5s9.png"}],"mentions":[{"avatarUrl":"/avatars/79a0cc8c5bae3c422160002dbc2869ce.svg","fullname":"Chenpeng Du","name":"cpdu","type":"user","isPro":false,"isHf":false},{"avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg","fullname":"Dmitry 
Ryumin","name":"DmitryRyumin","type":"user","isPro":false,"isHf":false}],"reactions":[{"reaction":"šŸ”„","users":["DmitryRyumin","ivan-ft","samiraeli","osanseviero","KvrParaskevi","samusenps","AlekseiPravdin"],"count":7},{"reaction":"šŸ‘","users":["SunixLiu","samusenps","kevinpics","AlekseiPravdin"],"count":4},{"reaction":"šŸ¤—","users":["DmitryRyumin","samusenps"],"count":2}],"publishedAt":"2024-05-12T10:47:30.000Z","updatedAt":"2024-05-12T12:33:17.350Z","commentators":[{"avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1628885133347-6116d0584ef9fdfbf45dc4d9.jpeg","fullname":"Mohamed Rashad","name":"MohamedRashad","type":"user","isPro":true,"isHf":false,"isFollowing":false},{"avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg","fullname":"Dmitry Ryumin","name":"DmitryRyumin","type":"user","isPro":false,"isHf":false,"isFollowing":false}],"url":"/posts/DmitryRyumin/698577433443867","totalUniqueImpressions":869,"numComments":2}],"totalPosts":48,"spaces":[{"author":"DmitryRyumin","authorData":{"avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg","fullname":"Dmitry Ryumin","name":"DmitryRyumin","type":"user","isPro":false,"isHf":false},"colorFrom":"gray","colorTo":"red","createdAt":"2024-01-14T16:35:50.000Z","emoji":"šŸ¤©šŸ”šŸ”„","id":"DmitryRyumin/NewEraAI-Papers","lastModified":"2024-05-15T04:18:04.000Z","likes":18,"pinned":true,"private":false,"repoType":"space","runtime":{"stage":"RUNNING","hardware":{"current":"cpu-basic","requested":"cpu-basic"},"storage":null,"gcTimeout":172800,"replicas":{"current":1,"requested":1},"devMode":false,"domains":[{"domain":"dmitryryumin-neweraai-papers.hf.space","isCustom":false,"stage":"READY"}]},"shortDescription":"Collections of the Best AI Conferences šŸ”","title":"NewEraAI 
Papers","isLikedByUser":false}],"u":{"avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg","isPro":false,"fullname":"Dmitry Ryumin","user":"DmitryRyumin","orgs":[{"avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1651774865582-60f1abe7544c2adfd699860c.png","fullname":"Gradio-Blocks-Party","name":"Gradio-Blocks","userRole":"contributor","type":"org","isHf":false},{"avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1679420283402-608b8bb39d7c9519b4adae19.png","fullname":"Gradio-Themes-Party","name":"Gradio-Themes","userRole":"contributor","type":"org","isHf":false},{"avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60a551a34ecc5d054c8ad93e/Ku5nM2bKq-8ZF3Jid1ocw.png","fullname":"Blog-explorers","name":"blog-explorers","userRole":"read","type":"org","isHf":false},{"avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60f1abe7544c2adfd699860c/jqGdWcdsgsHIK_mYahpbU.png","fullname":"ICCV2023","name":"ICCV2023","userRole":"contributor","type":"org","isHf":false},{"avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6493306970d925ae80523a53/jD37on0_AIpwv0Njydsnb.png","fullname":"New Era Artificial Intelligence","name":"NewEraAI","userRole":"admin","type":"org","isHf":false},{"avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/5f17f0a0925b9863e28ad517/33rvDIrCmr6wpK3_W6RGz.png","fullname":"ZeroGPU Explorers","name":"zero-gpu-explorers","userRole":"read","type":"org","isHf":false},{"avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/5f17f0a0925b9863e28ad517/nxmdd6m86cxu55UZBlQeg.jpeg","fullname":"Social Post Explorers","name":"social-post-explorers","userRole":"read","type":"org","isHf":false},{"avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/5f17f0a0925b9863e28ad517/V8fnWFEWwXTgCQuIHnPmk.png","fullname":"Dev Mode 
Explorers","name":"dev-mode-explorers","userRole":"read","type":"org","isHf":false},{"avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63691c3eda9b693c2730b2a2/WoOIHdJahrAnLo1wpyVjc.png","fullname":"Journalists on Hugging Face","name":"JournalistsonHF","userRole":"contributor","type":"org","isHf":false}],"signup":{"github":"DmitryRyumin","details":"Machine Learning and Applications, Multi-Modal Understanding","homepage":"https://dmitryryumin.github.io","twitter":""},"isHf":false,"type":"user"},"upvotes":26,"repoFilterModels":{"sortKey":"modified"},"repoFilterDatasets":{"sortKey":"modified"},"repoFilterSpaces":{"sortKey":"modified"},"numFollowers":261,"numFollowing":6,"isFollowing":false,"isFollower":false,"sampleFollowers":[{"user":"dongchans","fullname":"Dong Chan Shin","type":"user","isPro":false,"avatarUrl":"/avatars/f50c05ee8b3105d20a8b291cc9f06ae4.svg"},{"user":"BlackB","fullname":"Thanadol Daroonsri","type":"user","isPro":false,"avatarUrl":"/avatars/1b78ac3490a362352a4762462eed66dc.svg"},{"user":"sayhan","fullname":"Sayhan YalvaƧer","type":"user","isPro":false,"avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/65aa2d4b356bf23b4a4da247/-lRjUf8LtY0NlWw9qU5Hs.jpeg"},{"user":"DigitalsDazzle","fullname":"Tawanda Nicole Lum","type":"user","isPro":false,"avatarUrl":"/avatars/29acb1917d6e1a9be53ea3078cb5b2df.svg"}],"isWatching":false}">

Dmitry Ryumin

DmitryRyumin

AI & ML interests

Machine Learning and Applications, Multi-Modal Understanding

Organizations

Posts 48

šŸš€šŸ‘•šŸŒŸ New Research Alert - SIGGRAPH 2024 (Avatars Collection)! šŸŒŸšŸ‘ššŸš€
šŸ“„ Title: LayGA: Layered Gaussian Avatars for Animatable Clothing Transfer šŸ”

šŸ“ Description: LayGA is a novel method for animatable clothing transfer that separates the body and clothing into two layers for improved photorealism and accurate clothing tracking, outperforming existing methods.

šŸ‘„ Authors: Siyou Lin, Zhe Li, Zhaoqi Su, Zerong Zheng, Hongwen Zhang, and Yebin Liu

šŸ“… Conference: SIGGRAPH, 28 Jul – 1 Aug, 2024 | Denver CO, USA šŸ‡ŗšŸ‡ø

šŸ“„ Paper: https://huggingface.co/papers/2405.07319

🌐 Github Page: https://jsnln.github.io/layga/index.html

šŸ“š More Papers: more cutting-edge research presented at other conferences in the https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: https://huggingface.co/collections/DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #LayGA #AnimatableClothingTransfer #VirtualTryOn #AvatarTechnology #SIGGRAPH2024 #ComputerGraphics #DeepLearning #ComputerVision #Innovation
šŸš€šŸŽ­šŸŒŸ New Research Alert - AniTalker (Avatars Collection)! šŸŒŸšŸŽ­šŸš€
šŸ“„ Title: AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding šŸ”

šŸ“ Description: AniTalker is a new framework that transforms a single static portrait and a single input audio file into animated, talking videos with natural, fluid movements.

šŸ‘„ Authors: Tao Liu, Feilong Chen, Shuai Fan, @cpdu, Qi Chen, Xie Chen, and Kai Yu

šŸ“„ Paper: https://huggingface.co/papers/2405.03121

🌐 Github Page: https://x-lance.github.io/AniTalker
šŸ“ Repository: https://github.com/X-LANCE/AniTalker

šŸ“š More Papers: more cutting-edge research presented at other conferences in the https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: https://huggingface.co/collections/DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #AniTalker #FacialAnimation #DynamicAvatars #FaceSynthesis #TalkingFaces #DiffusionModel #ComputerGraphics #DeepLearning #ComputerVision #Innovation
