title | link | article |
---|---|---|
FiftyOne Computer Vision Datasets Come to the Hugging Face Hub | https://hf.co/blog/jamarks/fiftyone-datasets-come-to-hf-hub | 📚Resources |
⚗️ 🔥 Building High-Quality Datasets with distilabel and Prometheus 2 | https://hf.co/blog/burtenshaw/distilabel-prometheus-2 | Resource |
Expert-Level Tutorials on Stable Diffusion & SDXL: Master Advanced Techniques and Strategies | https://hf.co/blog/MonsterMMORPG/expert-level-tutorials-on-stable-diffusion-gen-ai | Tutorial Videos |
Wikipedia's Treasure Trove: Advancing Machine Learning with Diverse Data | https://hf.co/blog/frimelle/wikipedias-treasure-trove-ml-data | More Wikimedia data on Hugging Face - How? |
Introducing Tenzin 1.0: | https://hf.co/blog/Tar9897/my-first-model |
<p>
Tenzin: A Technical Exploration into Achieving Artificial General Intelligence
Artificial General Intelligence (AGI) represents the zenith of artificial intelligence research—a machine capable of understanding, learning, and applying knowledge across a wide array of tasks at a level comparable to human intelligence. While various models and approaches have been pursued in the quest for AGI, Tenzin stands out due to its unique methodologies and promising potential. This article delves into the technical differences between Tenzin and traditional Large Language Models (LLMs), elucidating why Tenzin has a significant chance of achieving AGI.</p>
<ol>
<li>Core Philosophical Differences
Traditional LLMs:</li>
</ol>
<p>Data-Driven Approach: Traditional LLMs, such as OpenAI's GPT series, rely on immense datasets and sophisticated neural network architectures. These models are fundamentally designed for next-token prediction in a sequence, excelling at generating human-like text.
Pattern Recognition: They identify and reproduce patterns found in training data, which constrains their ability to generalize beyond specific contexts.
Tenzin:</p>
<p>Integrated Cognitive Framework: Tenzin is built on a holistic learning philosophy, combining symbolic reasoning, probabilistic inference, and experiential learning.
Abstract Concept Manipulation: This approach enables Tenzin to understand and manipulate abstract concepts, enhancing its adaptability and versatility in novel situations.</p>
<ol start="2">
<li>Learning Mechanisms
LLMs:</li>
</ol>
<p>Supervised Learning: LLMs predominantly rely on supervised learning with human-annotated data, which is both time-consuming and costly.
Predefined Knowledge: Their performance is limited by the scope of the training data.
Tenzin:</p>
<p>Hybrid Learning: Tenzin employs a combination of supervised, unsupervised, and reinforcement learning.
Supervised Learning: Utilizes labeled data to learn specific tasks.
Unsupervised Learning: Discovers patterns and structures in data without explicit labels.
Reinforcement Learning: Adapts strategies based on feedback from the environment through trial and error.
Continuous Improvement: This multi-faceted approach allows Tenzin to develop a more comprehensive understanding and improve over time.</p>
<ol start="3">
<li>Cognitive Architecture
LLMs:</li>
</ol>
<p>Text-Centric: The architecture of LLMs is primarily focused on text generation and comprehension.
Limited Reasoning: These models excel at language-related tasks but struggle with complex reasoning or multi-modal understanding.
Tenzin:</p>
<p>Brain-Inspired Design: Tenzin's cognitive architecture is inspired by the human brain, incorporating memory, attention, and perception.
Multi-Modal Integration: It processes and integrates information from multiple sources, including text, images, and sensory data.
Advanced Reasoning: This capability allows Tenzin to engage in sophisticated reasoning and problem-solving.</p>
<ol start="4">
<li>Flexibility and Adaptability
LLMs:</li>
</ol>
<p>Static Knowledge Base: Traditional LLMs are rigid and cannot easily adapt to new information without extensive retraining.
Slow Adaptation: This inflexibility is a drawback in dynamic environments requiring rapid learning.
Tenzin:</p>
<p>Continuous Learning Framework: Tenzin is designed for adaptability, updating its knowledge base and refining skills in real-time.
Real-Time Updates: This ensures Tenzin remains relevant and effective as the external environment evolves.</p>
<ol start="5">
<li>Real-World Applications
LLMs:</li>
</ol>
<p>Success in Specific Domains: LLMs have been successful in applications like chatbots, content generation, and translation services.
Contextual Limitations: Their utility is often constrained by limited contextual understanding and insight generation.
Tenzin:</p>
<p>Broad Cognitive Abilities: Tenzin’s capabilities extend to a wider range of complex environments and tasks.
Scientific Research: In scientific domains, Tenzin can assist in hypothesis generation, data analysis, and experimental design.
Medical Diagnosis: Tenzin’s ability to synthesize information from medical literature, patient records, and real-time data makes it valuable in diagnostic and treatment planning.
Autonomous Systems: For autonomous systems like robotics and self-driving cars, Tenzin’s multi-modal integration and real-time learning enhance decision-making and adaptability.
Creative Industries: Tenzin’s abstract reasoning and knowledge synthesis capabilities can drive innovation in creative fields like art, music, and literature.</p>
<p>Advanced Technical Aspects</p>
<p>God’s Algorithm
Concept: God’s Algorithm refers to an optimal solution to a problem where the solution can be found in the shortest possible time, assuming infinite computational power.</p>
<p>Tenzin's Approach:</p>
<p>Symbolic Reasoning: Tenzin utilizes symbolic reasoning to explore problem spaces efficiently, similar to how God’s Algorithm operates.
Optimal Pathfinding: By combining symbolic AI with heuristic methods, Tenzin can approximate optimal solutions in complex domains, such as solving puzzles or optimizing logistics.</p>
<p>A* Search Algorithm
Concept: The A* algorithm is a widely used pathfinding and graph traversal algorithm, known for its efficiency in finding the shortest path from a start node to a goal node.</p>
<p>Tenzin's Implementation:</p>
<p>Heuristic-Driven Search: Tenzin incorporates A* in its reasoning processes, using heuristic functions to estimate the cost of reaching the goal from a given state.
Adaptation and Learning: Tenzin can dynamically adjust its heuristics based on real-time feedback, improving its search efficiency over time.</p>
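<p>As a concrete illustration of this kind of heuristic-driven search, here is a generic, textbook-style A* sketch in Python (not Tenzin's actual code; the grid example and costs are made up):</p>
<pre><code class="language-python">import heapq
import itertools

def a_star(start, goal, neighbors, heuristic):
    # neighbors(node) yields (next_node, step_cost); heuristic(node) estimates remaining cost to goal
    counter = itertools.count()  # tie-breaker so the heap never compares nodes directly
    frontier = [(heuristic(start), next(counter), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + heuristic(nxt), next(counter), new_g, nxt, path + [nxt]))
    return None, float("inf")

# Tiny 5x5 grid example with a Manhattan-distance heuristic
def grid_neighbors(node):
    x, y = node
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < 5 and 0 <= y + dy < 5:
            yield (x + dx, y + dy), 1

path, cost = a_star((0, 0), (4, 4), grid_neighbors, lambda n: abs(4 - n[0]) + abs(4 - n[1]))
</code></pre>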
<p>Grover’s Algorithm
Concept: Grover’s Algorithm is a quantum algorithm that provides a quadratic speedup for unstructured search problems.</p>
<p>Tenzin’s Potential Utilization:</p>
<p>Quantum-Enhanced Learning: While Tenzin is primarily based on classical computation, integrating quantum algorithms like Grover’s could significantly enhance its search capabilities.
Hybrid Quantum-Classical Systems: Tenzin could leverage quantum computing for specific tasks requiring massive parallelism and speed, such as cryptographic analysis or large-scale data mining.</p>
<p>Conclusion</p>
<p>Tenzin represents a significant departure from traditional LLMs, offering a more integrated and adaptive approach to artificial intelligence. Its unique combination of symbolic reasoning, probabilistic inference, and experiential learning provides a robust foundation for achieving AGI. By incorporating advanced algorithms such as God’s Algorithm, A*, and potentially even quantum-enhanced methods like Grover’s, Tenzin is poised to push the boundaries of what is possible in AI. While challenges remain, Tenzin's distinctive methodologies and promising capabilities suggest that it has a strong chance of reaching the ambitious goal of AGI, ultimately transforming the landscape of artificial intelligence and its applications in the real world.</p>
|
Mergoo: Efficiently Build Your Own MoE LLM | https://hf.co/blog/alirezamsh/mergoo | Learn More |
Fine-tuning LLMs with Singular Value Decomposition | https://hf.co/blog/fractalego/svd-training | References |
Introducing UNA-ThePitbull Series | https://hf.co/blog/fblgit/una-thepitbull | Bonus |
Indexify: Bringing HuggingFace Models to Real-Time Pipelines for Production Applications | https://hf.co/blog/rishiraj/announcing-indexify | Start Using Indexify |
HelpingAI 9B: Cutting Edge Emotionally Intelligent AI | https://hf.co/blog/KingNish/helpingai-9b | Conclusion: |
How to directly access 150k+ Hugging Face Datasets with DuckDB and query using GPT-4o | https://hf.co/blog/chilijung/access-150k-hugging-face-datasets-with-duckdb | Start asking questions |
FaceChain-FACT: Open-source 10-second portrait generation, reusing massive LoRa styles, a base-model-friendly portrait application. | https://hf.co/blog/haoyufirst/facechain-fact | Expansion & Co-construction |
Revolutionizing Human-Computer Interaction: The Emotional Intelligence and Ethical Impact of HelpingAI-9B | https://hf.co/blog/Abhaykoul/helpingai | 7.3. Closing Remarks |
So WTF is an Audio Embedding Model? | https://hf.co/blog/cappuch/audio-embedding-wtf | What Can Audio Embedding Model Be Used For |
Orchestration of Experts: The First-Principle Multi-Model System | https://hf.co/blog/alirezamsh/leeroo-multi-model-system | Citation |
How to Fine-Tune Custom Embedding Models Using AutoTrain | https://hf.co/blog/abhishek/finetune-custom-embeddings-autotrain | Summary |
GPU Poor Savior: Revolutionizing Low-Bit Open Source LLMs and Cost-Effective Edge Computing | https://hf.co/blog/NicoNico/green-bit-llm | Links: |
Not Legal Advice on AI Training Data in Japan | https://hf.co/blog/leonardlin/ai-training-data-in-japan | Terms of Service and Synthetic Data |
Sales Forecasting with Image Regression | https://hf.co/blog/tonyassi/image-regression | About Me |
AI has a problem with objectifying women | https://hf.co/blog/sasha/objectifying-women-in-ai |
<p>
May 24, 2024</p>
<p>Last week, OpenAI did a much-publicized <a href="https://www.wired.com/story/openai-gpt-4o-model-gives-chatgpt-a-snappy-flirty-upgrade/" rel="nofollow">demo</a> of their new chatbot, GPT-4o, now endowed with a speech interface. One of the voices used during the demo, nicknamed Sky, instantly attracted widespread attention, as much for the servile attitude it adopted, commenting on what one of the presenters was wearing, as for its eerie resemblance to the voice used in “Her”, Spike Jonze’s 2013 movie about a man who falls in love with his operating system, voiced by Scarlett Johansson. Sam Altman, the CEO of OpenAI, had even <a href="https://x.com/sama/status/1790075827666796666" rel="nofollow">tweeted a single word, “her”,</a> the same day, cementing the connection in people’s minds.</p>
<p>But a week after the GPT-4o demo, Scarlett Johansson herself issued a <a href="https://www.wired.com/story/scarlett-johansson-says-openai-ripped-off-her-voice-for-chatgpt/" rel="nofollow">statement</a> saying that OpenAI had mimicked her voice despite her refusal to collaborate with the company on multiple occasions. The company had already <a href="https://x.com/OpenAI/status/1792443575839678909" rel="nofollow">disabled</a> the voice over the weekend, but the question remains - if a Hollywood actress feels used and abused by the way an AI tool misrepresents her voice and likeness, what hope is there for the rest of us?</p>
<p>I’ve been working in AI for over a decade, in a field that has <a href="https://www.wired.com/story/artificial-intelligence-researchers-gender-imbalance/" rel="nofollow">less than 12%</a> of female researchers. I’m often the only woman speaking on a panel or sitting at the table, and - I must admit - making sure that 50% of the world’s population is represented in technology that has the potential to change humanity is truly exhausting. And yet, the small choices that we make when creating AI systems can have wide-ranging repercussions that can contribute to entrenched perceptions and persistent stereotypes. </p>
<p>Take Lena Forsén, whose <a href="https://www.wired.com/story/finding-lena-the-patron-saint-of-jpegs/" rel="nofollow">image</a> was used as the industry standard for testing compression algorithms and machine learning models alike, despite her repeated requests for the image to stop being used. As far as I know, all of the official copies of it have since been taken down (previously, you could load it automatically in libraries like <a href="https://github.com/opencv/opencv/issues/17461" rel="nofollow">OpenCV</a> and <a href="https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.misc.lena.html" rel="nofollow">SciPy</a>), but people still use it in academic papers and model demos, since it has acquired a cult status in the AI community.</p>
<p>Apart from depictions of individual women, representations of the entire gender are often incredibly objectifying and demeaning: for years in computer vision, a perfectly acceptable task for testing the performance of AI models was putting makeup on images of women, or swapping out their clothing from jeans to miniskirts and back. </p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/60edd0133e2c73a9a21455f5/H_DFFyLUKcIp7Lq0BCJS2.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/60edd0133e2c73a9a21455f5/H_DFFyLUKcIp7Lq0BCJS2.png"/></a>
Image source: <a href="https://arxiv.org/pdf/1907.01144" rel="nofollow">Zhang et al. (2019)</a></p>
<p>These systems were motivated by obvious advertising applications, yet the question of consent and representation was sorely lacking, and each time I would speak up, I would face pushback - it was just a “benchmark”, after all.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/60edd0133e2c73a9a21455f5/WnRJLZ6mN7uhi8WQRWL2B.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/60edd0133e2c73a9a21455f5/WnRJLZ6mN7uhi8WQRWL2B.png"/></a>
Image source: <a href="https://openreview.net/pdf?id=ryxwJhC9YX" rel="nofollow">Mo et al. (2019)</a></p>
<p>With the advent of increasingly realistic image generation models, the objectification of women has only gotten worse. Much like the commercials of women in bikinis eating burgers that were used to sell pickup trucks, AI-generated images of women - celebrities, but also anonymous people from the internet - are often used to demonstrate how good image generation models have gotten. Because what’s better to show the improvement of an image generation model than a half-naked woman in a fried chicken bikini?</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/60edd0133e2c73a9a21455f5/ABzOSFUP4w0o7GTDsjO5v.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/60edd0133e2c73a9a21455f5/ABzOSFUP4w0o7GTDsjO5v.png"/></a></p>
<p>Image source: <a href="https://www.reddit.com/r/StableDiffusion/comments/10a31di/your_stable_diffusion_year_in_review/" rel="nofollow">Reddit</a></p>
<p>This is despite AI-enabled image generation models having documented problematic behaviors, from the <a href="https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/" rel="nofollow">spontaneous generation of nude images of women</a> to the <a href="https://huggingface.co/spaces/society-ethics/StableBias">gender and racial biases</a> that are hard-baked into these models. AI undoubtedly has a problem with the objectification of women, and this comes with consequences both for Hollywood celebrities and for mere mortals like myself. </p>
<p>But not all is lost - there are many actions that can be taken to shift the status quo. For decision makers, being more cognizant of issues of gender and power can translate into emphasizing diversity in positions of power, like <a href="https://www.wired.com/story/women-in-tech-openai-board/" rel="nofollow">boards</a> and executive roles. </p>
<p>For AI developers, this can be done by avoiding the implicit objectification of women in system demos, not using the common “traditionally attractive young nubile woman” images as the go-to example, and obtaining the explicit consent of the people you choose to illustrate your system. </p>
<p>For the community at large, we can do better at supporting organizations like <a href="https://www.wiml.org/" rel="nofollow">Women in Machine Learning</a> (of which I’m on the board!) to amplify the voices of women in our field and empower future generations.</p>
<p>Because the situation with Scarlett isn’t the first instance of this kind of treatment of women and minorities by people in positions of power in AI, and it will be far from the last. And pushing back on this treatment – and demanding respect, consent and a seat at the table – can help turn the tide on AI’s longstanding tradition of objectifying women.</p>
|
Training MoE on AWS Trainium | https://hf.co/blog/ynakashima/training-moe-on-aws-trainium | Conclusion |
Let's talk about LLM evaluation | https://hf.co/blog/clefourrier/llm-evaluation | Acknowledgements |
Synthetic dataset generation techniques: generating custom sentence similarity data | https://hf.co/blog/davanstrien/synthetic-similarity-datasets | Conclusion |
Journey With Me Into The Mind of Large Language Models: Interesting Findings in AnthropicAI's Scaling Monosemanticity paper. | https://hf.co/blog/Jaward/journey-with-me-into-the-mind-of-llms |
<p>
One of the many unknowns with LLMs is the why behind the responses they give - it's unclear why certain responses are chosen over others, which shows how little we know about what's happening inside these models. </p>
<p>To get a deeper sense of this, the Anthropic researchers tried sparse dictionary learning on a larger model (Claude 3 Sonnet), wherein they match patterns of neuron activations (named features) to human-interpretable meanings.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/eHYPLpd4eBsh-I6nqepHf.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/eHYPLpd4eBsh-I6nqepHf.png"/></a></p>
<p>Dictionary learning is a traditional ML technique that identifies recurring patterns of neuron activations across various contexts. This means any internal state of the model can be expressed as a combination of a few active features rather than numerous active neurons.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/8TTBChQxAYHV8UFtCuoK6.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/8TTBChQxAYHV8UFtCuoK6.png"/></a></p>
<p>They scaled up a more effective form of dictionary learning using a Sparse Autoencoder (SAE). The SAE has an encoder that maps inputs to sparse, high-dimensional features via a linear transformation and ReLU, and a decoder that reconstructs the inputs from those features.</p>
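<p>As a rough sketch of that encoder/decoder setup (not Anthropic's actual code; the dimensions and the L1 weight below are made-up illustrative values):</p>
<pre><code class="language-python">import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=512, d_features=4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # activations -> many candidate features
        self.decoder = nn.Linear(d_features, d_model)  # features -> reconstructed activations

    def forward(self, x):
        features = torch.relu(self.encoder(x))  # sparse, high-dimensional feature activations
        return self.decoder(features), features

sae = SparseAutoencoder()
acts = torch.randn(8, 512)  # stand-in for residual stream activations
recon, feats = sae(acts)
# Reconstruction loss plus an L1 penalty that pushes most features toward zero (sparsity)
loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
loss.backward()
</code></pre>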
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/p8zaWM-0xRRx2PUlr8XLR.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/p8zaWM-0xRRx2PUlr8XLR.png"/></a></p>
<p>Three variants of the SAE were trained (with ~1M, ~4M and ~34M features). Across all of them, fewer than 300 features were active per token and more than 65% of the variance was explained, with dead features at ~2% for the 1M SAE, 35% for the 4M SAE and 65% for the 34M SAE - implying that better training could reduce dead features.</p>
<p>In the experiments, the SAEs were applied to residual stream activations (RSAs) at the model's middle layer. Why there? 1) RSAs are smaller than the MLP layers, so the compute cost is lower; 2) it helps tackle "cross-layer superposition" - when features are spread across multiple layers instead of being isolated in specific layers, which makes interpretation difficult. These experiments revealed that scaling laws can help guide the training of these SAEs.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/dA9YbJbY1HdWZnL8NzTQc.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/dA9YbJbY1HdWZnL8NzTQc.png"/></a></p>
<p>My favorite, of course, is the basic code features - where the model attributes meaning to different code syntax elements, similar to syntax highlighting in text editors.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/j54RReEQe9Is3DW9KEUWC.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/j54RReEQe9Is3DW9KEUWC.png"/></a></p>
|
Enjoy the Power of Phi-3 with ONNX Runtime on your device | https://hf.co/blog/Emma-N/enjoy-the-power-of-phi-3-with-onnx-runtime | Enjoy Phi-3 on your device |
What is going on with AlphaFold3? | https://hf.co/blog/as-cle-bert/what-is-going-on-with-alphafold3 | References |
Decoding GPT-4'o': In-Depth Exploration of Its Mechanisms and Creating Similar AI. | https://hf.co/blog/KingNish/decoding-gpt-4o | Making of Similar AI |
Sora | https://hf.co/blog/Kvikontent/sora | Ready to try it out? |
Explaining the SDXL latent space | https://hf.co/blog/TimothyAlexisVass/explaining-the-sdxl-latent-space | Back to top |
Diffusion Models | https://hf.co/blog/Esmail-AGumaan/diffusion-models | Citation: |
Evaling llm-jp-eval (evals are hard) | https://hf.co/blog/leonardlin/llm-jp-eval-eval |
<p>
With training of <a href="https://wandb.ai/augmxnt/shisa-v2/" rel="nofollow">shisa-v2</a> starting in earnest, I've been digging a bit more into <a href="https://github.com/llm-jp/llm-jp-eval" rel="nofollow">llm-jp-eval</a>, which I used as a quick and simple benchmark to help track shisa-v1 (especially the base model) performance during development.</p>
<p>WandB has implemented the <a href="https://wandb.ai/wandb-japan/llm-leaderboard/reports/Nejumi-LLM-Leaderboard-Evaluating-Japanese-Language-Proficiency--Vmlldzo2MzU3NzIy" rel="nofollow">Nejumi LLM Leaderboard</a>, which has a <a href="https://github.com/wandb/llm-leaderboard" rel="nofollow">test harness</a> for modifying settings and uploading/viewing on WandB (here's my workspace: <a href="https://wandb.ai/augmxnt/nejumi" rel="nofollow">https://wandb.ai/augmxnt/nejumi</a>).</p>
<p>First I was curious to see if I could replicate the <a href="https://huggingface.co/augmxnt/shisa-gamma-7b-v1">shisa-gamma-7b-v1</a> results, which, despite (IMO) being superseded by generally stronger JA models, still actually ranks at the top of the JA-tuned open models. But if you look for <a href="https://huggingface.co/augmxnt/shisa-7b-v1">shisa-7b-v1</a> you'll find that it basically does not complete. These models are basically tuned on the same dataset, so what's going on exactly?</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63a7422854f1d0225b075bfc/CoQ7eYMhzZxryfIVmex5_.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63a7422854f1d0225b075bfc/CoQ7eYMhzZxryfIVmex5_.png"/></a></p>
<p>Well, for the second part, that's actually relatively easy to answer. Both the <a href="https://huggingface.co/augmxnt/shisa-base-7b-v1">base model</a> and the final <a href="https://huggingface.co/augmxnt/shisa-7b-v1">shisa-7b</a> were trained requiring a <code>bos_token</code> (see <a href="https://huggingface.co/augmxnt/shisa-7b-v1#prompt-format">note</a>), and they simply won't produce sensible output without it. You can see that when using a <a href="https://github.com/shisa-ai/nejumi/blob/main/configs/shisa-7b-v1.prompt-jaster-bostoken.yaml" rel="nofollow">config with the default prompt with a bos_token</a> prepended, it <a href="https://wandb.ai/augmxnt/nejumi/runs/ahzrypj9" rel="nofollow">scores almost as well as shisa-gamma-7b-v1</a>.</p>
<p>Now as for untangling the other mystery - surprisingly, when testing with the native (llama2) chat template, it actually performs much worse. Why is this? Well, <a href="https://wandb.ai/augmxnt/nejumi/reports/Weave-jaster_output_table_dev-24-05-17-23-44-51---Vmlldzo3OTkxMjE3" rel="nofollow">let's look at the output</a>. It turns out that when you use the Japanese prompt... the model tends to want to answer in Japanese:</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63a7422854f1d0225b075bfc/Jv1_jdxnvn-CdrdKUOkVL.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63a7422854f1d0225b075bfc/Jv1_jdxnvn-CdrdKUOkVL.png"/></a></p>
<p>This is significantly less likely to happen when not using the fine-tuned chat formatting (it reverts to acting more like a base model). As logprobs aren't used, well, if you reply in Japanese, the scores are a big fat zero on those benchmarks. I did some clicking around and found this and other "formatting"-related correctness issues to be prevalent, as many of the scores collected are exact-match scores (this is important because below a certain size/capability level, LLMs will be pretty brittle; also, fine-tuning for single-word answers tends to destroy general performance for smaller models). Let's just dive into one example.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63a7422854f1d0225b075bfc/P-MyvwWAs3PgWNxKq_aPW.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63a7422854f1d0225b075bfc/P-MyvwWAs3PgWNxKq_aPW.png"/></a></p>
<p>If you order on the Nejumi Leaderboard by JA MT-Bench scores, you can see Lightblue's <a href="https://huggingface.co/lightblue/qarasu-14B-chat-plus-unleashed">lightblue/qarasu-14B-chat-plus-unleashed</a> is the top scoring JA open model for MT-Bench, but it has a relatively abysmal <code>0.1437</code> on the llm-jp-eval. WandB has <a href="https://wandb.ai/wandb-japan/llm-leaderboard" rel="nofollow">info on all the runs they've done</a> as well, so clicking in, let's <a href="https://wandb.ai/wandb-japan/llm-leaderboard/runs/xwmmpnly" rel="nofollow">take a look</a>. This model scores straight 0's on many tests/categories. Let's see for <code>janli</code>:</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63a7422854f1d0225b075bfc/KawWXTNJhvsGxzC43EaZM.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63a7422854f1d0225b075bfc/KawWXTNJhvsGxzC43EaZM.png"/></a></p>
<p>Here it's returning Japanese, not English. How about for <code>jcommonsense</code>?</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63a7422854f1d0225b075bfc/oxwk045F1L6IA1xQbkOrz.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63a7422854f1d0225b075bfc/oxwk045F1L6IA1xQbkOrz.png"/></a></p>
<p>This requires more checking, but I believe it has to be a parsing error... And how about <code>mawps_exact</code>?</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63a7422854f1d0225b075bfc/fU-0EGq5grpPsUUwc9Oy3.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63a7422854f1d0225b075bfc/fU-0EGq5grpPsUUwc9Oy3.png"/></a></p>
<p>OK, well, looks like Qarasu is not the best at math, but even when it <em>does</em> get the right answer, it still gets a 0, since it's scored as an exact match of <em>only</em> a number as an answer. I mean, maybe this tests 0-shot instruction following, but I'm not sure it's as meaningful as it could be (or testing what people think it is) when they look at a "math" score.</p>
<p><code>llm-jp-eval</code> 1.1.0 runs very quickly (for <code>shisa-7b-v1</code>, with its speedy tokenizer, it runs in 3min on an H100) and I think it has its place, but I've seen that most of the Japanese LLM community has decided to use it as a "gold standard" type of benchmark, and I'm not sure that everyone realizes what exactly a good score does or does not mean. Some thoughts on improving it:</p>
<ul>
<li><p>While NLP performance is an important aspect of LLMs, randomly clicking through responses, I see that for many models the score reflects formatting as much as performance. I think the instructions/prompts probably need to be improved, along with few-shotting.</p>
</li>
<li><p>I believe that templating is definitely an issue, and I think that moving forward, everyone should be supporting a model's <a href="https://github.com/chujiezheng/chat_templates" rel="nofollow">chat_template</a> if provided in the tokenizer. There probably also needs to be some thought about whether JA language models are supposed to reply to JA questions in EN or not... Rather than exact match, maybe logprob is the way to go, like what <a href="https://github.com/EleutherAI/lm-evaluation-harness" rel="nofollow">lm-eval</a> does?</p>
</li>
<li><p>Of course, all of these tests have <a href="https://github.com/llm-jp/llm-jp-eval/blob/main/DATASET.md" rel="nofollow">train sets</a> available, and you could train on them (we threw the train sets into <a href="https://huggingface.co/datasets/augmxnt/ultra-orca-boros-en-ja-v1">ultra-orca-boros-en-ja-v1</a> and found it improved our overall JA performance, but <a href="https://github.com/llm-jp/llm-jp-sft#for-the-13b-model-single-node-single-a100-40gb-gpu-1" rel="nofollow">when used as the primary sft</a>, it actually obliterated all general performance (e.g. basic Japanese coherence, the ability to respond with more than one-word answers) when I tested their original 1.0 "jaster" fine-tunes of models).</p>
</li>
</ul>
<p>llm-jp-eval continues to iterate, and it looks like with 1.3.0 they've been adding wiki and other comprehension tests, but without moving to multiple-choice logprob or oracle/LLM-as-judge scoring, it's maybe something to revisit later. Here's some <a href="https://wandb.ai/llm-jp-eval/test-eval/reports/llm-jp---Vmlldzo1NzE0NjA1?accessToken=s09hm7xrqg43ls8i25am6t0r7iwjpninwzeelqqgbx53zivlm9s04ixfpv3xgiwm" rel="nofollow">more information (and their own leaderboard) from the llm-jp-eval team</a>.</p>
<p>Anyway, it was a bit surprising when I started digging in today, so I just thought I'd share.</p>
<p>BTW, JA MT-Bench has its own problems. Non-JA-trained models sometimes score surprisingly well on it (GPT-4 as judge). Well, running some simple JA detection code shows that some of the highest scoring models are <em>simply not outputting Japanese</em>:</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63a7422854f1d0225b075bfc/EW1Ql9lELArXEakMZCTMo.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63a7422854f1d0225b075bfc/EW1Ql9lELArXEakMZCTMo.png"/></a></p>
<p>~80% is the cutoff for whether it's actually replying in Japanese or not.</p>
<p>Note for those looking to run similar testing, I used a dead simple (but good enough for rough cut) heuristic:</p>
<pre><code>def is_japanese_char(char):
# Check if the character falls within the Japanese Unicode codepoint ranges
return any([
0x3040 <= ord(char) <= 0x309F, # Hiragana
0x30A0 <= ord(char) <= 0x30FF, # Katakana
0x4E00 <= ord(char) <= 0x9FFF # Kanji
])
</code></pre>
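<p>To apply this per response against the ~80% cutoff mentioned above, a minimal sketch (the helper names here are just illustrative):</p>
<pre><code class="language-python">def japanese_ratio(text):
    # Share of non-whitespace characters that fall in the Japanese ranges above
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return 0.0
    return sum(is_japanese_char(c) for c in chars) / len(chars)

def replies_in_japanese(text, cutoff=0.8):
    return japanese_ratio(text) >= cutoff
</code></pre>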
|
2024-04-22 - Hub Incident Post Mortem | https://hf.co/blog/mcpotato/hub-incident-post-mortem-20240422 | Timeline |
Hugging Face + Google Visual Blocks | https://hf.co/blog/radames/hugging-face-google-visual-blocks | Acknowledgements |
Multimodal Augmentation for Documents: Recovering “Comprehension” in “Reading and Comprehension” task | https://hf.co/blog/danaaubakirova/doc-augmentation | References |
Synthetic dataset generation techniques: Self-Instruct | https://hf.co/blog/davanstrien/self-instruct | Using Self Instruct |
Glaze and the Effectiveness of Anti-AI Methods for Diffusion Models | https://hf.co/blog/parsee-mizuhashi/glaze-and-anti-ai-methods | Conclusion |
RFDiffusion Potentials | https://hf.co/blog/AmelieSchreiber/rfdiffusion-potentials | Example 3: Combining <code>substrate_contacts</code>, <code>monomer_ROG</code>, and <code>monomer_contacts</code> for motif scaffolding |
Exploration of Job Application Automation with Data Scraping | https://hf.co/blog/herooooooooo/automation-job-applications-with-python-and-ollama | Conclusion |
Everything About Long Context Fine-tuning | https://hf.co/blog/wenbopan/long-context-fine-tuning | What's Next |
Advancing Open-source Large Language Models in the Medical & Healthcare Domain | https://hf.co/blog/aaditya/openbiollm | Detailed Medical Subjectwise accuracy |
Energy Star Ratings for AI Models | https://hf.co/blog/sasha/energy-star-ai-proposal | Future Work |
Train Custom Models on Hugging Face Spaces with AutoTrain SpaceRunner | https://hf.co/blog/abhishek/autotrain-spacerunner |
<p>
Did you know you could train your custom models on Hugging Face Spaces!!!? Yes, it's possible and super easy to do with AutoTrain SpaceRunner 💥 All you need is a Hugging Face account (which you probably have already) and a payment method attached to your account (in case you want to use GPUs; CPU training is free!). So, stop spending time setting up everything on other cloud providers and use AutoTrain SpaceRunner to train your models: the training environment is set up for you already and you can install/uninstall any requirements you might have for your project! Sounds exciting? Let's see how to do it!</p>
<p>The first step would be to create a project folder. The project folder can consist of anything but must have a <code>script.py</code> file. This script file is the entry-point:</p>
<pre><code>-- my_project
---- some_module
---- some_other_module
---- script.py
---- requirements.txt
</code></pre>
<p><code>requirements.txt</code> is optional and only needed if you want to add or remove packages. For example, the following requirements.txt removes xgboost, which is preinstalled, and then installs catboost:</p>
<pre><code>-xgboost
catboost
</code></pre>
<p><code>-</code> in front of a package name means uninstall.</p>
<p>What should <code>script.py</code> look like? </p>
<p>Well, however you want it to look. Here's a sample:</p>
<pre><code>for _ in range(10):
print("Hello World!")
</code></pre>
<p>You can do <em>anything</em> you want in <code>script.py</code>. Imports from local modules are also possible, as long as they are present in the project directory.</p>
<p>The final step is to run the code on Spaces. Here's how to do it.</p>
<p>Install AutoTrain if not done already, <code>pip install -U autotrain-advanced</code>. Then you can run <code>autotrain spacerunner --help</code>. This will give you all the arguments needed.</p>
<pre><code>❯ autotrain spacerunner --help
usage: autotrain <command> [<args>] spacerunner [-h] --project-name PROJECT_NAME --script-path SCRIPT_PATH --username USERNAME --token TOKEN
--backend {spaces-a10gl,spaces-a10gs,spaces-a100,spaces-t4m,spaces-t4s,spaces-cpu,spaces-cpuf}
[--env ENV] [--args ARGS]
✨ Run AutoTrain SpaceRunner
options:
-h, --help show this help message and exit
--project-name PROJECT_NAME
Name of the project. Must be unique.
--script-path SCRIPT_PATH
Path to the script
--username USERNAME Hugging Face Username, can also be an organization name
--token TOKEN Hugging Face API Token
--backend {spaces-a10gl,spaces-a10gs,spaces-a100,spaces-t4m,spaces-t4s,spaces-cpu,spaces-cpuf}
Hugging Face backend to use
--env ENV Environment variables, e.g. --env FOO=bar;FOO2=bar2;FOO3=bar3
--args ARGS Arguments to pass to the script, e.g. --args foo=bar;foo2=bar2;foo3=bar3;store_true_arg
</code></pre>
<p><code>--project-name</code> is the unique name used to create the Space and dataset (containing your project files) on the Hugging Face Hub. Everything is stored privately and you can delete it after your script has run.</p>
<p><code>--script-path</code> is the local path to the directory that contains <code>script.py</code>. </p>
<p>Need to pass environment variables? Use <code>--env</code>. Use <code>--args</code> if you need to pass arguments to <code>script.py</code>.</p>
<p>You can choose any of the <code>spaces-*</code> backends to run your code. The Space will pause itself (thus saving you money) when done. 🚀</p>
<p>Here's an example command:</p>
<pre><code class="language-bash">$ autotrain spacerunner \
--project-name custom_llama_training \
--script-path /path/to/script/py/ \
--username abhishek \
--token <span class="hljs-variable">$HF_WRITE_TOKEN</span> \
--backend spaces-a10g-large \
--args padding=right;push_to_hub \
--<span class="hljs-built_in">env</span> TOKENIZERS_PARALLELISM=<span class="hljs-literal">false</span>;TRANSFORMERS_VERBOSITY=error
</code></pre>
<p>Locally, the script is run like:</p>
<pre><code class="language-bash">$ TOKENIZERS_PARALLELISM=<span class="hljs-literal">false</span>;TRANSFORMERS_VERBOSITY=error python script.py --padding right --push_to_hub
</code></pre>
<p>Available Backends:</p>
<pre><code>"spaces-a10g-large": "a10g-large",
"spaces-a10g-small": "a10g-small",
"spaces-a100-large": "a100-large",
"spaces-t4-medium": "t4-medium",
"spaces-t4-small": "t4-small",
"spaces-cpu-upgrade": "cpu-upgrade",
"spaces-cpu-basic": "cpu-basic",
"spaces-l4x1": "l4x1",
"spaces-l4x4": "l4x4",
"spaces-a10g-largex2": "a10g-largex2",
"spaces-a10g-largex4": "a10g-largex4",
</code></pre>
<p>After running the spacerunner command, you will be provided with a link to the Space where you can monitor your training. As simple as that!</p>
<p>Note: autotrain spacerunner will not save artifacts on its own, so you must have code that saves the artifacts/outputs in your <code>script.py</code>. P.S. save them in a Hugging Face dataset repo ;) </p>
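<p>As a rough sketch of what that could look like at the end of <code>script.py</code> (the repo name here is just an example, and this assumes your token is available to the script):</p>
<pre><code class="language-python">from huggingface_hub import HfApi

api = HfApi()
# Create (or reuse) a private dataset repo and upload the training outputs to it
api.create_repo("abhishek/my-training-artifacts", repo_type="dataset", private=True, exist_ok=True)
api.upload_folder(folder_path="outputs", repo_id="abhishek/my-training-artifacts", repo_type="dataset")
</code></pre>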
<p>Questions? Comments? Feature requests? Issues? Use the GitHub issues for AutoTrain Advanced: <a href="https://github.com/huggingface/autotrain-advanced" rel="nofollow">https://github.com/huggingface/autotrain-advanced</a> ⭐️</p>
|
makeMoE: Implement a Sparse Mixture of Experts Language Model from Scratch | https://hf.co/blog/AviSoori1x/makemoe-from-scratch | Putting it all together |
Can we create pedagogically valuable multi-turn synthetic datasets from Cosmopedia? | https://hf.co/blog/davanstrien/cosmochat | TODO |
Evalverse: Revolutionizing Large Language Model Evaluation with a Unified, User-Friendly Framework | https://hf.co/blog/Yescia/evalverse-llm-evaluation-opensource | Citation |
🧑⚖️ "Replacing Judges with Juries" using distilabel | https://hf.co/blog/alvarobartt/replacing-judges-with-juries-distilabel | References |
Fish Speech V1 - New Multilingual Open Source TTS Model | https://hf.co/blog/lengyue233/fish-speech-1 | Next Steps |
Google Search with LLM | https://hf.co/blog/nand-tmp/google-search-with-llm | How to use RAG method to access the entire internet with ML |
Token Merging for fast LLM inference : Background and first trials with Mistral | https://hf.co/blog/samchain/token-merging-fast-inference | Links |
⚗️ 🧑🏼🌾 Let's grow some Domain Specific Datasets together | https://hf.co/blog/burtenshaw/domain-specific-datasets | <strong>5. Review and share the dataset</strong> |
Expanding Model Context and Creating Chat Models with a Single Click | https://hf.co/blog/maywell/llm-feature-transfer | Links |
Estimating Memory Consumption of LLMs for Inference and Fine-Tuning for Cohere Command-R+ | https://hf.co/blog/Andyrasika/memory-consumption-estimation | Conclusion |
Post-OCR-Correction: 1 billion words dataset of automated OCR correction by LLM | https://hf.co/blog/Pclanglais/post-ocr-correction | Current results and use cases |
Can We Train Chat Models with Raw Data? | https://hf.co/blog/maywell/layer-aware-1 | This simple experiment was designed and conducted based on empirical intuition rather than theoretical grounds. |
RealWorldQA, What's New? | https://hf.co/blog/KennyUTC/realworldqa | Takeaway |
How to Finetune phi-3 on MacBook Pro | https://hf.co/blog/abhishek/phi3-finetune-macbook |
<p>
In this blog, I'll show you how you can train/finetune the latest Phi-3 model from Microsoft on your MacBook Pro! You'll need an M1 or M2 Mac to do this. We will be using AutoTrain Advanced!</p>
<p>To install AutoTrain Advanced, you can do:</p>
<pre><code class="language-bash">$ pip install autotrain-advanced
</code></pre>
<p>Note: autotrain doesn't install PyTorch, torchvision, etc., so you need to install them yourself. You can create a conda environment and install these dependencies:</p>
<pre><code class="language-bash">$ conda create -n autotrain python=3.10
$ conda activate autotrain
$ conda install pytorch::pytorch torchvision torchaudio -c pytorch
$ pip install autotrain-advanced
</code></pre>
<p>Once done, you can use the AutoTrain CLI or the UI on your Mac machine! We will take a look at both!</p>
<p>AutoTrain not only offers LLM finetuning but also many other tasks such as text classification, image classification, DreamBooth LoRA, etc. But in this blog post, we are looking at LLM finetuning.</p>
<p>You can see all parameters you can adjust for llm finetuning by doing</p>
<pre><code class="language-bash">$ autotrain llm --<span class="hljs-built_in">help</span>
</code></pre>
<p>The next step is grabbing the data. In this blog I'm going to show you how you can train on your MacBook with SFT training and ORPO tuning (the big but small brother of DPO).</p>
<ul>
<li>For SFT training, we need a dataset with a single text column. We can use something like <a href="https://huggingface.co/datasets/timdettmers/openassistant-guanaco">timdettmers/openassistant-guanaco</a> or Alpaca-like datasets. Note: these datasets are already formatted as text with a system prompt, user instruction and assistant message. If you have them in a format like the following:</li>
</ul>
<pre><code>[ { "content": "Definition: In this task, you need to count the number of vowels (letters 'a', 'e', 'i', 'o', 'u') / consonants (all letters other than vowels) in the given sentence.\nInput: Sentence: 'a baseball player is in his hitting stance as a few people watch'. Count the number of consonants in the given sentence.\nOutput:", "role": "user" }, { "content": "32", "role": "assistant" } ]
</code></pre>
<p>you can use AutoTrain's chat-template parameter. We will see it later in this post, for ORPO training. So, we will cover SFT training using a pre-formatted dataset and do ORPO training with a chat template.</p>
<ul>
<li>For ORPO training, you can use a dataset like <a href="https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized">argilla/distilabel-capybara-dpo-7k-binarized</a>. This dataset has many columns but we are interested only in <code>chosen</code> & <code>rejected</code> columns.</li>
</ul>
<p>With AutoTrain, creating or finding the dataset will be the most time-consuming part. Now that we have the dataset, we can do SFT training using:</p>
<pre><code class="language-bash">autotrain llm \
--train \
--model microsoft/Phi-3-mini-4k-instruct \
--data-path timdettmers/openassistant-guanaco \
--lr 2e-4 \
--batch-size 2 \
--epochs 1 \
--trainer sft \
--peft \
--project-name my-own-phi-3-on-mac \
--username abhishek \
--push-to-hub \
--token <span class="hljs-variable">$HF_TOKEN</span>
</code></pre>
<p>Where $HF_TOKEN is your Hugging Face write token, in case you want to push the trained model to the Hugging Face Hub for easy deployment and sharing. You can find your tokens <a href="https://huggingface.co/settings/tokens">here</a>.</p>
<p>Note that we are using LoRA, and that's why we have the <code>--peft</code> parameter. Also, if the text column is not called <code>text</code> in your dataset, you can add another parameter: <code>--text-column your_datasets_text_column</code>. In case you want to use your own CSV/JSON file instead of a Hugging Face Hub dataset, you can call it train.csv / train.jsonl and place it in a local folder. The command to train changes slightly:</p>
<pre><code class="language-bash">autotrain llm \
--train \
--model microsoft/Phi-3-mini-4k-instruct \
--data-path /path/to/folder/containing/training/file \
--text-column text_column_in_your_dataset \
--lr 2e-4 \
--batch-size 2 \
--epochs 1 \
--trainer sft \
--peft \
--project-name my-own-phi-3-on-mac \
--username abhishek \
--push-to-hub \
--token <span class="hljs-variable">$HF_TOKEN</span>
</code></pre>
<p>Next, we come to ORPO training. For ORPO training, we change <code>--trainer sft</code> to <code>--trainer orpo</code>.</p>
<pre><code class="language-bash">autotrain llm \
--train \
--model microsoft/Phi-3-mini-4k-instruct \
--data-path argilla/distilabel-capybara-dpo-7k-binarized \
--text-column chosen \
--rejected-text-column rejected \
--lr 2e-4 \
--batch-size 2 \
--epochs 1 \
--trainer orpo \
--chat-template chatml \
--peft \
--project-name my-own-phi-3-on-mac-orpo \
--username abhishek \
--push-to-hub \
--token <span class="hljs-variable">$HF_TOKEN</span>
</code></pre>
<p>There are 4 changes above: the column mappings, the trainer, the dataset and, of course, the <code>--chat-template</code> parameter, which is set to <code>chatml</code>. For <code>--chat-template</code>, the choices are: <code>zephyr</code>, <code>chatml</code>, <code>tokenizer</code> or None. None is used if you have already formatted the data properly yourself, as we did in SFT training.</p>
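<p>For reference, chatml-style formatting roughly wraps each turn in im_start/im_end markers. A rough sketch of the idea (AutoTrain and the tokenizer apply the actual template for you, so the exact tokens and spacing may differ):</p>
<pre><code class="language-python">def to_chatml(messages):
    # messages: list of {"role": ..., "content": ...} dicts, like the dataset sample shown earlier
    out = ""
    for m in messages:
        out += "<|im_start|>" + m["role"] + "\n" + m["content"] + "<|im_end|>\n"
    return out
</code></pre>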
<p>Now, if the CLI is too much for you, you can also use the UI, which is even easier and allows you to upload files too!</p>
<p>To use the UI:</p>
<pre><code class="language-bash">$ <span class="hljs-built_in">export</span> HF_TOKEN=your_huggingface_write_token
$ autotrain app --host 127.0.0.1 --port 10000
</code></pre>
<p>Then go to <a href="http://127.0.0.1:10000" rel="nofollow">http://127.0.0.1:10000</a> in your browser and enjoy the AutoTrain UI! 🚀 A screenshot with the same params as for the ORPO training above is shown below:</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/5fa19f4ba13e063b8b2b5e11/-YLNMBbo4mer1zytmuyDs.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/5fa19f4ba13e063b8b2b5e11/-YLNMBbo4mer1zytmuyDs.png"/></a></p>
<p>If you are not able to find Phi-3 in the model dropdown list, you can use this URL instead: <code>http://127.0.0.1:10000/?custom_models=microsoft/Phi-3-mini-4k-instruct</code>. Note: I've added Phi-3 as a custom model. You can do the same for any other compatible model from the Hub. ;)</p>
<p>Both SFT and ORPO trainings were successfully tested on a M2 Max MacBook Pro.</p>
<p>In case of any issues or feature requests, use the <a href="https://github.com/huggingface/autotrain-advanced" rel="nofollow">GitHub repo</a>.</p>
<p>P.S. Extensive documentation for AutoTrain can be found <a href="https://hf.co/docs/autotrain" rel="nofollow">here</a>.</p>
<p>Happy (auto)training! 🚀 </p>
|
Fine Tuning a LLM Using Kubernetes with Intel® Xeon® Scalable Processors | https://hf.co/blog/dmsuehir/llama2-fine-tuning-k8s | Citations |
LLM Comparison/Test: Llama 3 Instruct 70B + 8B HF/GGUF/EXL2 (20 versions tested and compared!) | https://hf.co/blog/wolfram/llm-comparison-test-llama-3 | TL;DR: Observations & Conclusions |
Outpainting III - Inpaint Model | https://hf.co/blog/OzzyGT/outpainting-inpaint-model | 4.- Final touch-ups |
Outpainting II - Differential Diffusion | https://hf.co/blog/OzzyGT/outpainting-differential-diffusion |
<p>
This is the third guide about outpainting. If you want to read about the other methods, here they are:</p>
<ul>
<li><a href="https://huggingface.co/blog/OzzyGT/outpainting-controlnet">Outpainting I - Controlnet version</a></li>
<li><a href="https://huggingface.co/blog/OzzyGT/outpainting-inpaint-model">Outpainting III - Inpaint Model</a></li>
</ul>
<p>In this guide I'll explore how to do outpainting with <a href="https://github.com/exx8/differential-diffusion" rel="nofollow">differential diffusion</a> in depth, going through each of the steps I took to get good results.</p>
<p>I’ll start with a non-square image that has a depth of field (bokeh) to make it more difficult. When an image has this kind of background, it’s really easy to see the seams. This is an image that I grabbed from <a href="https://unsplash.com/photos/a-man-in-a-white-hoodie-holding-a-camera-Qj9kZ05paJI" rel="nofollow">Unsplash</a>:</p>
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/d2B1x4BAWf2kzJzHBBeFU.jpeg" width="400"/>
</p>
<p>So, the first task is to make it a square image (expand it) so we can keep making it bigger, but I’ll generate images of 1024x1024 each time as this is the optimal resolution for SDXL.</p>
<p>Then, I’ll test what happens if I just fill the new area with a gray background. To do that, we need to create a mask that can work with differential diffusion. For this, I’ll move the margin 50 pixels to the left and apply a blur filter. This helps to smooth the transition.</p>
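<p>A rough Pillow sketch of how the squared image and the blurred mask could be built (the sizes, the 50-pixel offset and the blur radius follow the description above; file names and the paste position are just placeholders):</p>
<pre><code class="language-python">from PIL import Image, ImageDraw, ImageFilter

source = Image.open("photo.jpg").convert("RGB")

# Paste the original on the left of a 1024x1024 gray canvas; the rest is the outpaint area
scale = 1024 / source.height
resized = source.resize((int(source.width * scale), 1024))
square = Image.new("RGB", (1024, 1024), (128, 128, 128))
square.paste(resized, (0, 0))

# White = area to generate. Move the boundary 50 px into the original image and blur it
# so differential diffusion gets a soft transition instead of a hard seam.
mask = Image.new("L", (1024, 1024), 0)
draw = ImageDraw.Draw(mask)
draw.rectangle((resized.width - 50, 0, 1024, 1024), fill=255)
mask = mask.filter(ImageFilter.GaussianBlur(20))
</code></pre>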
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th>squared image</th>
<th>mask</th>
<th>blurred mask</th>
</tr>
</thead><tbody><tr>
<td><img src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/cq6MwMDP0EULAcePp3_O3.png" width="240"/></td>
<td><img src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/6LZSgargIzkyCEec5Ids9.png" width="240"/></td>
<td><img src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/viDk7p8KfvJt59G_bfU-N.png" width="240"/></td>
</tr>
</tbody>
</table>
</div>
<p>We're going to use the community pipeline <code>StableDiffusionXLDifferentialImg2ImgPipeline</code> and it's loaded like this:</p>
<pre><code class="language-python">pipeline = StableDiffusionXLPipeline.from_pretrained(
<span class="hljs-string">"stabilityai/stable-diffusion-xl-base-1.0"</span>,
torch_dtype=torch.float16,
variant=<span class="hljs-string">"fp16"</span>,
custom_pipeline=<span class="hljs-string">"pipeline_stable_diffusion_xl_differential_img2img"</span>,
).to(<span class="hljs-string">"cuda"</span>)
image = pipeline(
prompt=<span class="hljs-string">""</span>,
negative_prompt=<span class="hljs-string">""</span>,
width=<span class="hljs-number">1024</span>,
height=<span class="hljs-number">1024</span>,
guidance_scale=<span class="hljs-number">6.0</span>,
num_inference_steps=<span class="hljs-number">25</span>,
original_image=image,
image=image,
strength=<span class="hljs-number">1.0</span>,
<span class="hljs-built_in">map</span>=mask,
).images[<span class="hljs-number">0</span>]
</code></pre>
<p>At this point, if we generate the image without a prompt, the model will think that the gray area is a gray object, like a wall:</p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th></th>
<th></th>
</tr>
</thead><tbody><tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/I3RqrhTHGsF_HymdvR7KE.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/I3RqrhTHGsF_HymdvR7KE.png"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/B56-oBfA0QFDfEyu11XRx.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/B56-oBfA0QFDfEyu11XRx.png"/></a></td>
</tr>
</tbody>
</table>
</div>
<p>If this is used by someone who knows how to draw, that person could make a rough drawing and generate the image. Since I’m not that person, I’ll need to think of a prompt for the new outpainting area.</p>
<p>For this, we can create it ourselves, use an online chatbot like GPT-4V or Bing Chat, or a local VLLM like Llava. Personally, I always like to use local VLLMs, and this one got my attention: <a href="https://huggingface.co/internlm/internlm-xcomposer2-vl-7b-4bit">internlm-xcomposer2-vl-7b-4bit</a> because it works really well, even with just the 4-bit version.</p>
<p>This is what I got:</p>
<p><code>The image captures a man standing on the shore of a body of water, possibly a lake or river. He is wearing a white hoodie with the word 'evolution' written across it and khaki pants. A green backpack is slung over his shoulders, and he holds a camera in his hands. The backdrop features a mountain range under a clear sky.</code></p>
<p>For comparison, this is what Bing gave me:</p>
<p><code>The image depicts a photographer, dressed in outdoor gear and holding a professional camera, set against a stunning backdrop of a serene lake and snow-capped mountains. It’s a beautiful blend of human activity and natural beauty.</code></p>
<p>When doing inpainting or outpainting, the prompt is really important. As an example, these are the results with both prompts:</p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th>XComposer2</th>
<th>XComposer2</th>
</tr>
</thead><tbody><tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/UEv34tUgaZe2iK3whfJHE.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/UEv34tUgaZe2iK3whfJHE.png"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/zut9yVjpopcntfDak1Rfz.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/zut9yVjpopcntfDak1Rfz.png"/></a></td>
</tr>
<tr>
<td>Bing</td>
<td>Bing</td>
</tr>
<tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/Zfwqu2PvEV1Yl7HKNohA8.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/Zfwqu2PvEV1Yl7HKNohA8.png"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/JXZJe8PddDV4VjErR-LBt.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/JXZJe8PddDV4VjErR-LBt.png"/></a></td>
</tr>
</tbody>
</table>
</div>
<p>For this specific image, and perhaps for SDXL in general, the prompt generated by XComposer2 is better because it describes the image without exaggerated words like <code>stunning backdrop</code>, <code>beautiful blend</code> or <code>natural beauty</code>.</p>
<p>Taking the XComposer2 prompt and fixing the seed, let’s see how differential diffusion works.</p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th>normal mask</th>
<th>blurred mask</th>
</tr>
</thead><tbody><tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/LrKK0JAOsW4Pj_Hnu0cde.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/LrKK0JAOsW4Pj_Hnu0cde.png"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/UEv34tUgaZe2iK3whfJHE.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/UEv34tUgaZe2iK3whfJHE.png"/></a></td>
</tr>
<tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/pW3WyohlhKW3sFP-iEurG.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/pW3WyohlhKW3sFP-iEurG.png"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/zut9yVjpopcntfDak1Rfz.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/zut9yVjpopcntfDak1Rfz.png"/></a></td>
</tr>
</tbody>
</table>
</div>
<p>We can see that differential diffusion blends the outpaint better with the original image, even when they’re totally different. Let’s see what happens when we increase the blur.</p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th>blur radius 20</th>
<th>blur radius 40</th>
</tr>
</thead><tbody><tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/UEv34tUgaZe2iK3whfJHE.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/UEv34tUgaZe2iK3whfJHE.png"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/3UnIA9kEl48TzR1E9xTRf.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/3UnIA9kEl48TzR1E9xTRf.png"/></a></td>
</tr>
<tr>
<td>blur radius 80</td>
<td>blur radius 100</td>
</tr>
<tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/Zwp66u_PyVk2v43ox0N42.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/Zwp66u_PyVk2v43ox0N42.png"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/WuT5GikRpvv-Goi_BxOwM.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/WuT5GikRpvv-Goi_BxOwM.png"/></a></td>
</tr>
</tbody>
</table>
</div>
<p>Now we can clearly see why differential diffusion is a really good method for inpainting and outpainting. With this outpaint area and a blur radius of 80 or 100, the only reason we can still see the seam is the color difference. Keep in mind that the larger the blur and the expanded area, the more the original image will change.</p>
<p>To solve this color problem, or at least attenuate it, we need to fill the new area with something else: something that helps the model better understand what we want in the new area.</p>
<p>There are several techniques that can do this, each with different uses. For example, <a href="https://gfx.cs.princeton.edu/pubs/Barnes_2009_PAR/" rel="nofollow">PatchMatch</a> or <a href="https://github.com/advimman/lama" rel="nofollow">LaMa</a> help with inpainting since they remove the original content and fill it with new content. For this use case, they don’t work that well because the area they need to fill is too big and completely new, so I’ll use the <a href="https://docs.opencv.org/4.x/df/d3d/tutorial_py_inpainting.html" rel="nofollow">OpenCV</a> inpainting algorithms. In this case, I like the result of the <code>telea algorithm</code>.</p>
<p>To use this method, it’s necessary to install OpenCV for Python:</p>
<pre><code class="language-bash">pip install opencv-python
</code></pre>
<p>It’s not a good idea to convert images between multiple libraries because it can result in a loss of quality. So, for this, I’ll convert all the functions to <code>OpenCV</code>. The only major difference is the blur. To obtain an effect similar to <code>Pillow</code>, we need to use a much higher value. In this case, a blur radius of 500.</p>
<p>The mask we need for the Telea inpaint must be the same size as the original mask, without the offset, since that’s the area we want to replace.</p>
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/FxSD3uQPv_clPjAIekxJ-.png" width="400"/>
</p>
<p>We need the model to use this information. Normally, with an inpainting model or a regular image-to-image pipeline, we would decrease the value of <code>strength</code>. But with differential diffusion, we can keep <code>strength</code> at the maximum and just make the mask lighter. I’ll use a dark gray for this.</p>
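<p>Putting the last few steps together, here is a minimal sketch of the mask preparation with OpenCV, assuming a left expansion and illustrative values and file name (the complete script at the end of the post does the same thing with more options):</p>
<pre><code class="language-python">import cv2
import numpy as np

# Illustrative values: expand 256 px to the left, mask offset 50, blur radius 500, gray value 50
expand_pixels, mask_offset, blur_radius, gray = 256, 50, 500, 50

image = cv2.imread("original.png")  # hypothetical input file
height, width = image.shape[:2]

# New canvas with the extra area on the left, original image pasted on the right
new_image = np.zeros((height, width + expand_pixels, 3), dtype=np.uint8)
new_image[:, expand_pixels:] = image

# Differential diffusion map: dark gray over the new area plus an offset into the image
mask = np.full((height, width + expand_pixels), 255, dtype=np.uint8)
mask[:, : expand_pixels + mask_offset] = gray

# Blur the map so the transition is gradual (kernel size must be odd)
if blur_radius % 2 == 0:
    blur_radius += 1
mask = cv2.GaussianBlur(mask, (blur_radius, blur_radius), 0)

# Telea mask: only the truly new pixels, without the offset
inpaint_mask = np.zeros((height, width + expand_pixels), dtype=np.uint8)
inpaint_mask[:, :expand_pixels] = 255
filled = cv2.inpaint(new_image, inpaint_mask, 3, cv2.INPAINT_TELEA)
</code></pre>
<p>The blurred <code>mask</code> is what we pass as the differential diffusion map, while <code>filled</code> replaces the flat-colored canvas as the actual input image.</p>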
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th></th>
<th></th>
</tr>
</thead><tbody><tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/KDd9FOHvAsUqr1B4VqizZ.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/KDd9FOHvAsUqr1B4VqizZ.png"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/mj58J-RkzarhqCFg5LQvP.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/mj58J-RkzarhqCFg5LQvP.png"/></a></td>
</tr>
<tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/QKwhkSK3frEoQbwmJYU9D.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/QKwhkSK3frEoQbwmJYU9D.png"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/PnMII7_IhhAvXjqpaDsni.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/PnMII7_IhhAvXjqpaDsni.png"/></a></td>
</tr>
</tbody>
</table>
</div>
<p>Now we have some good results, but I still see two problems: we can still see the seam because there’s a slight difference in the colors, and we depend on the prompt to get there. If we pass the wrong prompt (which is quite likely if you generate it with a VLM), the outpainting will be bad.</p>
<p>To fix both of these problems, we’re going to use IP Adapters. This is pretty obvious; there’s no better way to tell the model the details of the original image than an Image Prompt.</p>
<p>The only problem we have right now is that the original image is not square, and IP Adapters only work with square images. The original authors propose a solution that involves resizing and padding the image, but that would feed the padded area to the model, and we don’t want that because that’s precisely the area we’re trying to paint.</p>
<p>Since we don’t really need to give it an exact composition, and we can feed multiple images to the IP Adapter, we’re going to slice the original image into squares and feed those to the IP Adapter. For this, it’s better to use the larger initial image and then resize each square down to 224x224, which is the size the IP Adapter needs.</p>
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/5a8nHJvw93I32zKPMyn2U.jpeg" width="400"/>
</p>
<p>This function can do this:</p>
<pre><code class="language-python"><span class="hljs-keyword">def</span> <span class="hljs-title function_">slice_image</span>(<span class="hljs-params">image</span>):
height, width, _ = image.shape
slice_size = <span class="hljs-built_in">min</span>(width // <span class="hljs-number">2</span>, height // <span class="hljs-number">3</span>)
slices = []
<span class="hljs-keyword">for</span> h <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(<span class="hljs-number">3</span>):
<span class="hljs-keyword">for</span> w <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(<span class="hljs-number">2</span>):
left = w * slice_size
upper = h * slice_size
right = left + slice_size
lower = upper + slice_size
<span class="hljs-keyword">if</span> w == <span class="hljs-number">1</span> <span class="hljs-keyword">and</span> right > width:
left -= right - width
right = width
<span class="hljs-keyword">if</span> h == <span class="hljs-number">2</span> <span class="hljs-keyword">and</span> lower > height:
upper -= lower - height
lower = height
<span class="hljs-built_in">slice</span> = image[upper:lower, left:right]
slices.append(<span class="hljs-built_in">slice</span>)
<span class="hljs-keyword">return</span> slices
</code></pre>
<p>These are the sliced images we get with it:</p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th></th>
<th></th>
<th></th>
</tr>
</thead><tbody><tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/tWlAKHoUZTf5ZqhLWORJa.jpeg" rel="nofollow"><img alt="image/jpeg" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/tWlAKHoUZTf5ZqhLWORJa.jpeg"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/3PJUSNlnAwV8N47XEqKaV.jpeg" rel="nofollow"><img alt="image/jpeg" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/3PJUSNlnAwV8N47XEqKaV.jpeg"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/6MiUnq0bsQ52NT64Vg3Ue.jpeg" rel="nofollow"><img alt="image/jpeg" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/6MiUnq0bsQ52NT64Vg3Ue.jpeg"/></a></td>
</tr>
<tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/1do0Tbj2jJHf849L96DdC.jpeg" rel="nofollow"><img alt="image/jpeg" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/1do0Tbj2jJHf849L96DdC.jpeg"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/QdPKacnL9VmfESx2UjyP0.jpeg" rel="nofollow"><img alt="image/jpeg" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/QdPKacnL9VmfESx2UjyP0.jpeg"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/6XackYEJpxiV-s_h1pmax.jpeg" rel="nofollow"><img alt="image/jpeg" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/6XackYEJpxiV-s_h1pmax.jpeg"/></a></td>
</tr>
</tbody>
</table>
</div>
<p>Since we’re not using a prompt and we’re feeding these images to the IP Adapter, we can lower the CFG to about 4.0.</p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th></th>
<th></th>
</tr>
</thead><tbody><tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/a9HwtuSXiJk9T2Io62Yjw.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/a9HwtuSXiJk9T2Io62Yjw.png"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/1J0C_sxlD5PDALS0UWjyv.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/1J0C_sxlD5PDALS0UWjyv.png"/></a></td>
</tr>
<tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/nnilpwvde14mzF4t7HoRp.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/nnilpwvde14mzF4t7HoRp.png"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/mEJTo5O4o7F_LbWzzcrMb.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/mEJTo5O4o7F_LbWzzcrMb.png"/></a></td>
</tr>
</tbody>
</table>
</div>
<p>Sometimes we still get images with seams, but most of the time they’re good, and the color difference is fixed because the IP Adapter gives that information to the model.</p>
<p>Now we have a script that can expand portrait/landscape images without the need for a prompt. These are tests I did with other images:</p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th>original</th>
<th>expanded</th>
</tr>
</thead><tbody><tr>
<td><img src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/LWUfqp84eQBbQuoq_4rns.jpeg" width="300"/></td>
<td><img src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/8YCJz0SZjkNpzYOJEpw_K.png" width="400"/></td>
</tr>
<tr>
<td><img src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/fLfW9zMPShz4MsKTBXDF0.jpeg" width="300"/></td>
<td><img src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/iaEFuj6o7Gh1L-PBFheHq.png" width="400"/></td>
</tr>
<tr>
<td><img src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/juaTntZtcDhjeIs6tYFWz.jpeg" width="300"/></td>
<td><img src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/MztS3W_TqCiodj4QyLfOj.png" width="400"/></td>
</tr>
<tr>
<td><img src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/hha3zjPLQzdFsuWkn4cY0.png" width="300"/></td>
<td><img src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/ui-YG_y_J7F7VEgDQ7LVd.png" width="400"/></td>
</tr>
</tbody>
</table>
</div>
<p>With this method, if the subject that you want to preserve is positioned at the border, it will change a little because we’re using a blurred mask. If you don’t want this, you can try to reduce the blur and the offset of the mask. If that doesn’t work, the only alternative is to use an inpainting model.</p>
<p>There are also some images that won’t work with this method. For example, this one:</p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th>original</th>
<th>expanded</th>
</tr>
</thead><tbody><tr>
<td><img src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/gf_Mt805NoRtNtK9_Pi-M.jpeg" width="300"/></td>
<td><img src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/EO-kpsOxWlbbBPFfn6YSQ.png" width="400"/></td>
</tr>
</tbody>
</table>
</div>
<p>That’s because we only have half of the subject, and also the Telea algorithm expands the colors to the right. In this case, we can give it a little help with the prompt. I’ll use "colored eggs inside a round nest on a table":</p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th></th>
<th></th>
</tr>
</thead><tbody><tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/inCYg9cHMorqZmgT0X82Q.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/inCYg9cHMorqZmgT0X82Q.png"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/saa5-9EniGkOI9-TKskMe.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/saa5-9EniGkOI9-TKskMe.png"/></a></td>
</tr>
</tbody>
</table>
</div>
<p>The model that you use is also very important. Some models perform outpainting better, while others are better suited for realistic photos or for specific genres like anime, fantasy, etc.</p>
<p>Now, the only thing we have left to do is to create really large outpaints:</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/Pe2eNLRXZw48TwOcMeiVG.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/Pe2eNLRXZw48TwOcMeiVG.png"/></a>
<a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/PM_05wPnU28OTOw50BZal.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/PM_05wPnU28OTOw50BZal.png"/></a>
<a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/9_j7hrBndwECXfbkbxQRN.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/9_j7hrBndwECXfbkbxQRN.png"/></a></p>
<p>This is the complete code. First, I make the image a square and then expand it. You can choose the direction in which to expand it. Please note that this is just a code example. You’ll need to modify it to suit your needs, but hopefully, this will help you get started with this kind of outpainting using diffusers and differential diffusion.</p>
<pre><code class="language-python"><span class="hljs-keyword">import</span> random
<span class="hljs-keyword">import</span> urllib.request
<span class="hljs-keyword">import</span> cv2
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
<span class="hljs-keyword">import</span> torch
<span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> DPMSolverMultistepScheduler, StableDiffusionXLPipeline
<span class="hljs-keyword">def</span> <span class="hljs-title function_">merge_images</span>(<span class="hljs-params">original, new_image, offset, direction</span>):
<span class="hljs-keyword">if</span> direction <span class="hljs-keyword">in</span> [<span class="hljs-string">"left"</span>, <span class="hljs-string">"right"</span>]:
merged_image = np.zeros((original.shape[<span class="hljs-number">0</span>], original.shape[<span class="hljs-number">1</span>] + offset, <span class="hljs-number">3</span>), dtype=np.uint8)
<span class="hljs-keyword">elif</span> direction <span class="hljs-keyword">in</span> [<span class="hljs-string">"top"</span>, <span class="hljs-string">"bottom"</span>]:
merged_image = np.zeros((original.shape[<span class="hljs-number">0</span>] + offset, original.shape[<span class="hljs-number">1</span>], <span class="hljs-number">3</span>), dtype=np.uint8)
<span class="hljs-keyword">if</span> direction == <span class="hljs-string">"left"</span>:
merged_image[:, offset:] = original
merged_image[:, : new_image.shape[<span class="hljs-number">1</span>]] = new_image
<span class="hljs-keyword">elif</span> direction == <span class="hljs-string">"right"</span>:
merged_image[:, : original.shape[<span class="hljs-number">1</span>]] = original
merged_image[:, original.shape[<span class="hljs-number">1</span>] + offset - new_image.shape[<span class="hljs-number">1</span>] : original.shape[<span class="hljs-number">1</span>] + offset] = new_image
<span class="hljs-keyword">elif</span> direction == <span class="hljs-string">"top"</span>:
merged_image[offset:, :] = original
merged_image[: new_image.shape[<span class="hljs-number">0</span>], :] = new_image
<span class="hljs-keyword">elif</span> direction == <span class="hljs-string">"bottom"</span>:
merged_image[: original.shape[<span class="hljs-number">0</span>], :] = original
merged_image[original.shape[<span class="hljs-number">0</span>] + offset - new_image.shape[<span class="hljs-number">0</span>] : original.shape[<span class="hljs-number">0</span>] + offset, :] = new_image
<span class="hljs-keyword">return</span> merged_image
<span class="hljs-keyword">def</span> <span class="hljs-title function_">slice_image</span>(<span class="hljs-params">image</span>):
height, width, _ = image.shape
slice_size = <span class="hljs-built_in">min</span>(width // <span class="hljs-number">2</span>, height // <span class="hljs-number">3</span>)
slices = []
<span class="hljs-keyword">for</span> h <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(<span class="hljs-number">3</span>):
<span class="hljs-keyword">for</span> w <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(<span class="hljs-number">2</span>):
left = w * slice_size
upper = h * slice_size
right = left + slice_size
lower = upper + slice_size
<span class="hljs-keyword">if</span> w == <span class="hljs-number">1</span> <span class="hljs-keyword">and</span> right > width:
left -= right - width
right = width
<span class="hljs-keyword">if</span> h == <span class="hljs-number">2</span> <span class="hljs-keyword">and</span> lower > height:
upper -= lower - height
lower = height
<span class="hljs-built_in">slice</span> = image[upper:lower, left:right]
slices.append(<span class="hljs-built_in">slice</span>)
<span class="hljs-keyword">return</span> slices
<span class="hljs-keyword">def</span> <span class="hljs-title function_">process_image</span>(<span class="hljs-params"></span>
<span class="hljs-params"> image,</span>
<span class="hljs-params"> fill_color=(<span class="hljs-params"><span class="hljs-number">0</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span></span>),</span>
<span class="hljs-params"> mask_offset=<span class="hljs-number">50</span>,</span>
<span class="hljs-params"> blur_radius=<span class="hljs-number">500</span>,</span>
<span class="hljs-params"> expand_pixels=<span class="hljs-number">256</span>,</span>
<span class="hljs-params"> direction=<span class="hljs-string">"left"</span>,</span>
<span class="hljs-params"> inpaint_mask_color=<span class="hljs-number">50</span>,</span>
<span class="hljs-params"> max_size=<span class="hljs-number">1024</span>,</span>
<span class="hljs-params"></span>):
height, width = image.shape[:<span class="hljs-number">2</span>]
new_height = height + (expand_pixels <span class="hljs-keyword">if</span> direction <span class="hljs-keyword">in</span> [<span class="hljs-string">"top"</span>, <span class="hljs-string">"bottom"</span>] <span class="hljs-keyword">else</span> <span class="hljs-number">0</span>)
new_width = width + (expand_pixels <span class="hljs-keyword">if</span> direction <span class="hljs-keyword">in</span> [<span class="hljs-string">"left"</span>, <span class="hljs-string">"right"</span>] <span class="hljs-keyword">else</span> <span class="hljs-number">0</span>)
<span class="hljs-keyword">if</span> new_height > max_size:
<span class="hljs-comment"># If so, crop the image from the opposite side</span>
<span class="hljs-keyword">if</span> direction == <span class="hljs-string">"top"</span>:
image = image[:max_size, :]
<span class="hljs-keyword">elif</span> direction == <span class="hljs-string">"bottom"</span>:
image = image[new_height - max_size :, :]
new_height = max_size
<span class="hljs-keyword">if</span> new_width > max_size:
<span class="hljs-comment"># If so, crop the image from the opposite side</span>
<span class="hljs-keyword">if</span> direction == <span class="hljs-string">"left"</span>:
image = image[:, :max_size]
<span class="hljs-keyword">elif</span> direction == <span class="hljs-string">"right"</span>:
image = image[:, new_width - max_size :]
new_width = max_size
height, width = image.shape[:<span class="hljs-number">2</span>]
new_image = np.full((new_height, new_width, <span class="hljs-number">3</span>), fill_color, dtype=np.uint8)
mask = np.full_like(new_image, <span class="hljs-number">255</span>, dtype=np.uint8)
inpaint_mask = np.full_like(new_image, <span class="hljs-number">0</span>, dtype=np.uint8)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
inpaint_mask = cv2.cvtColor(inpaint_mask, cv2.COLOR_BGR2GRAY)
<span class="hljs-keyword">if</span> direction == <span class="hljs-string">"left"</span>:
new_image[:, expand_pixels:] = image[:, : max_size - expand_pixels]
mask[:, : expand_pixels + mask_offset] = inpaint_mask_color
inpaint_mask[:, :expand_pixels] = <span class="hljs-number">255</span>
<span class="hljs-keyword">elif</span> direction == <span class="hljs-string">"right"</span>:
new_image[:, :width] = image
mask[:, width - mask_offset :] = inpaint_mask_color
inpaint_mask[:, width:] = <span class="hljs-number">255</span>
<span class="hljs-keyword">elif</span> direction == <span class="hljs-string">"top"</span>:
new_image[expand_pixels:, :] = image[: max_size - expand_pixels, :]
mask[: expand_pixels + mask_offset, :] = inpaint_mask_color
inpaint_mask[:expand_pixels, :] = <span class="hljs-number">255</span>
<span class="hljs-keyword">elif</span> direction == <span class="hljs-string">"bottom"</span>:
new_image[:height, :] = image
mask[height - mask_offset :, :] = inpaint_mask_color
inpaint_mask[height:, :] = <span class="hljs-number">255</span>
<span class="hljs-comment"># mask blur</span>
<span class="hljs-keyword">if</span> blur_radius % <span class="hljs-number">2</span> == <span class="hljs-number">0</span>:
blur_radius += <span class="hljs-number">1</span>
mask = cv2.GaussianBlur(mask, (blur_radius, blur_radius), <span class="hljs-number">0</span>)
<span class="hljs-comment"># telea inpaint</span>
_, mask_np = cv2.threshold(inpaint_mask, <span class="hljs-number">128</span>, <span class="hljs-number">255</span>, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
inpaint = cv2.inpaint(new_image, mask_np, <span class="hljs-number">3</span>, cv2.INPAINT_TELEA)
<span class="hljs-comment"># convert image to tensor</span>
inpaint = cv2.cvtColor(inpaint, cv2.COLOR_BGR2RGB)
inpaint = torch.from_numpy(inpaint).permute(<span class="hljs-number">2</span>, <span class="hljs-number">0</span>, <span class="hljs-number">1</span>).<span class="hljs-built_in">float</span>()
inpaint = inpaint / <span class="hljs-number">127.5</span> - <span class="hljs-number">1</span>
inpaint = inpaint.unsqueeze(<span class="hljs-number">0</span>).to(<span class="hljs-string">"cuda"</span>)
<span class="hljs-comment"># convert mask to tensor</span>
mask = torch.from_numpy(mask)
mask = mask.unsqueeze(<span class="hljs-number">0</span>).<span class="hljs-built_in">float</span>() / <span class="hljs-number">255.0</span>
mask = mask.to(<span class="hljs-string">"cuda"</span>)
<span class="hljs-keyword">return</span> inpaint, mask
<span class="hljs-keyword">def</span> <span class="hljs-title function_">image_resize</span>(<span class="hljs-params">image, new_size=<span class="hljs-number">1024</span></span>):
height, width = image.shape[:<span class="hljs-number">2</span>]
aspect_ratio = width / height
new_width = new_size
new_height = new_size
<span class="hljs-keyword">if</span> aspect_ratio != <span class="hljs-number">1</span>:
<span class="hljs-keyword">if</span> width > height:
new_height = <span class="hljs-built_in">int</span>(new_size / aspect_ratio)
<span class="hljs-keyword">else</span>:
new_width = <span class="hljs-built_in">int</span>(new_size * aspect_ratio)
image = cv2.resize(image, (new_width, new_height), interpolation=cv2.INTER_LANCZOS4)
<span class="hljs-keyword">return</span> image
pipeline = StableDiffusionXLPipeline.from_pretrained(
<span class="hljs-string">"SG161222/RealVisXL_V4.0"</span>,
torch_dtype=torch.float16,
variant=<span class="hljs-string">"fp16"</span>,
custom_pipeline=<span class="hljs-string">"pipeline_stable_diffusion_xl_differential_img2img"</span>,
).to(<span class="hljs-string">"cuda"</span>)
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config, use_karras_sigmas=<span class="hljs-literal">True</span>)
pipeline.load_ip_adapter(
<span class="hljs-string">"h94/IP-Adapter"</span>,
subfolder=<span class="hljs-string">"sdxl_models"</span>,
weight_name=[
<span class="hljs-string">"ip-adapter-plus_sdxl_vit-h.safetensors"</span>,
],
image_encoder_folder=<span class="hljs-string">"models/image_encoder"</span>,
)
pipeline.set_ip_adapter_scale(<span class="hljs-number">0.1</span>)
<span class="hljs-keyword">def</span> <span class="hljs-title function_">generate_image</span>(<span class="hljs-params">prompt, negative_prompt, image, mask, ip_adapter_image, seed: <span class="hljs-built_in">int</span> = <span class="hljs-literal">None</span></span>):
<span class="hljs-keyword">if</span> seed <span class="hljs-keyword">is</span> <span class="hljs-literal">None</span>:
seed = random.randint(<span class="hljs-number">0</span>, <span class="hljs-number">2</span>**<span class="hljs-number">32</span> - <span class="hljs-number">1</span>)
generator = torch.Generator(device=<span class="hljs-string">"cpu"</span>).manual_seed(seed)
image = pipeline(
prompt=prompt,
negative_prompt=negative_prompt,
width=<span class="hljs-number">1024</span>,
height=<span class="hljs-number">1024</span>,
guidance_scale=<span class="hljs-number">4.0</span>,
num_inference_steps=<span class="hljs-number">25</span>,
original_image=image,
image=image,
strength=<span class="hljs-number">1.0</span>,
<span class="hljs-built_in">map</span>=mask,
generator=generator,
ip_adapter_image=[ip_adapter_image],
output_type=<span class="hljs-string">"np"</span>,
).images[<span class="hljs-number">0</span>]
image = (image * <span class="hljs-number">255</span>).astype(np.uint8)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
<span class="hljs-keyword">return</span> image
prompt = <span class="hljs-string">""</span>
negative_prompt = <span class="hljs-string">""</span>
direction = <span class="hljs-string">"right"</span> <span class="hljs-comment"># left, right, top, bottom</span>
inpaint_mask_color = <span class="hljs-number">50</span> <span class="hljs-comment"># lighter values keep more of the Telea inpainting</span>
expand_pixels = <span class="hljs-number">256</span> <span class="hljs-comment"># I recommend not going beyond half of the picture so it keeps context</span>
times_to_expand = <span class="hljs-number">4</span>
url = <span class="hljs-string">"https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/differential/photo-1711580377289-eecd23d00370.jpeg?download=true"</span>
<span class="hljs-keyword">with</span> urllib.request.urlopen(url) <span class="hljs-keyword">as</span> url_response:
img_array = np.array(<span class="hljs-built_in">bytearray</span>(url_response.read()), dtype=np.uint8)
original = cv2.imdecode(img_array, -<span class="hljs-number">1</span>)
image = image_resize(original)
expand_pixels_to_square = <span class="hljs-number">1024</span> - image.shape[<span class="hljs-number">1</span>] <span class="hljs-comment"># image.shape[1] for horizontal, image.shape[0] for vertical</span>
image, mask = process_image(
image, expand_pixels=expand_pixels_to_square, direction=direction, inpaint_mask_color=inpaint_mask_color
)
ip_adapter_image = []
<span class="hljs-keyword">for</span> index, part <span class="hljs-keyword">in</span> <span class="hljs-built_in">enumerate</span>(slice_image(original)):
ip_adapter_image.append(part)
generated = generate_image(prompt, negative_prompt, image, mask, ip_adapter_image)
final_image = generated
<span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(times_to_expand):
image, mask = process_image(
final_image, direction=direction, expand_pixels=expand_pixels, inpaint_mask_color=inpaint_mask_color
)
ip_adapter_image = []
<span class="hljs-keyword">for</span> index, part <span class="hljs-keyword">in</span> <span class="hljs-built_in">enumerate</span>(slice_image(generated)):
ip_adapter_image.append(part)
generated = generate_image(prompt, negative_prompt, image, mask, ip_adapter_image)
final_image = merge_images(final_image, generated, <span class="hljs-number">256</span>, direction)
cv2.imwrite(<span class="hljs-string">"result.png"</span>, final_image)
</code></pre>
|
Outpainting I - Controlnet version | https://hf.co/blog/OzzyGT/outpainting-controlnet | 6.- Outpaint tip with IP Adapter |
Exploring Emotionally Intelligent AI with HelpingAI | https://hf.co/blog/Abhaykoul/emotionally-intelligent-ai | 8.1. Embracing the Era of Emotionally Intelligent AI |
Fine-tune Llama 3 with ORPO | https://hf.co/blog/mlabonne/orpo-llama-3 | References |
Starting Tiny with Protein LLaMA | https://hf.co/blog/monsoon-nlp/greenbeing-and-protein-models | Limitations and Safety Notes |
Mixture of Depth is Vibe | https://hf.co/blog/joey00072/mixture-of-depth-is-vibe | Few Gotchas |
Custom architectures with HuggingFace 🤗 | https://hf.co/blog/not-lain/custom-architectures-with-huggingface | push to hub 🤗 |
Run the strongest open-source LLM model: Llama3 70B with just a single 4GB GPU! | https://hf.co/blog/lyogavin/llama3-airllm | Does Llama3’s success herald the rise of open-source models?? |
On Coding Your First Attention | https://hf.co/blog/Jaward/coding-your-first-attention |
<p>
While you don’t necessarily have to code the attention block of a transformer from scratch to understand how it works, it sure is the closest you can get to a first-principles understanding of why and how transformers behave the way they do.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/4MMtJDefZBU8dpmHana6B.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/4MMtJDefZBU8dpmHana6B.png"/></a></p>
<p>@karpathy covered attention in detail in his nanoGPT video (strongly recommend watching). Now I would like to share some thoughts and experience in writing my first attention.</p>
<p>First let’s zoom out quickly and explain what attention is in transformers:
Attention in transformers is a communication mechanism that allows the model to focus on different parts of the input sequence when making predictions. </p>
<p>It assigns weights to each input token based on its relevance to the current context, enabling the model to weigh information selectively. This mechanism helps transformers capture long-range dependencies and contextual information effectively.</p>
<p>The original "Attention Is All You Need" paper introduced two commonly used forms of attention: Scaled Dot-Product Attention (also known as Self-Attention) and Multi-Head Attention, which runs several self-attention heads in parallel.</p>
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#the-code" id="the-code" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
The Code
</span>
</h1>
<p>Now, attention, as with most deep learning algorithms, boils down to a math equation, so writing the code is fairly trivial, especially with a deep learning framework like PyTorch. Below is what's called Single-Head Attention:</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/c-RzcFcoyRVFqYCSxgvsS.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/c-RzcFcoyRVFqYCSxgvsS.png"/></a></p>
<p>The code defines single-head attention in PyTorch - it transforms input vectors, computes attention scores and weights, and then calculates the weighted sum of values based on these weights (as per the attention equation).</p>
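<p>Since the snippet above is shown as an image, here is a minimal text version of the same idea (my own variable names, not necessarily identical to the screenshot):</p>
<pre><code class="language-python">import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleHeadAttention(nn.Module):
    def __init__(self, embed_dim, head_dim):
        super().__init__()
        # Linear projections for queries, keys and values
        self.q_proj = nn.Linear(embed_dim, head_dim)
        self.k_proj = nn.Linear(embed_dim, head_dim)
        self.v_proj = nn.Linear(embed_dim, head_dim)

    def forward(self, q, k, v, mask=None):
        Q, K, V = self.q_proj(q), self.k_proj(k), self.v_proj(v)
        # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
        scores = Q @ K.transpose(-2, -1) / math.sqrt(Q.size(-1))
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        weights = F.softmax(scores, dim=-1)
        return weights @ V
</code></pre>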
<p>When you have multiple of those running in parallel, you get what's called Multi-Head Attention. The code gets much simpler if you build on the SingleHeadAttention class:</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/QfUeWOfSU1J64OR7yIYUn.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/QfUeWOfSU1J64OR7yIYUn.png"/></a></p>
<p>This one creates multiple attentions (inheriting from SingleHeadAttention class) and stacks them in parallel. During the forward pass, it applies each head to the input tensors Q, K, and V, concatenates the outputs, and linearly transforms them to produce the final output.</p>
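<p>Again, as a rough text version of the screenshot: note that the original inherits from the SingleHeadAttention class, while this sketch simply composes several heads, which amounts to the same computation:</p>
<pre><code class="language-python">import torch
import torch.nn as nn

# Assumes the SingleHeadAttention class from the previous sketch
class MultiHeadAttention(nn.Module):
    def __init__(self, embed_dim, num_heads):
        super().__init__()
        head_dim = embed_dim // num_heads
        # Several single-head attentions running in parallel
        self.heads = nn.ModuleList(
            [SingleHeadAttention(embed_dim, head_dim) for _ in range(num_heads)]
        )
        # Final linear layer applied to the concatenated head outputs
        self.out_proj = nn.Linear(num_heads * head_dim, embed_dim)

    def forward(self, q, k, v, mask=None):
        # Apply each head to Q, K, V, then concatenate along the feature dimension
        out = torch.cat([head(q, k, v, mask) for head in self.heads], dim=-1)
        return self.out_proj(out)
</code></pre>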
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#end-note" id="end-note" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
End Note
</span>
</h1>
<p>So, in essence, having to code this made me revisit some key underlying concepts (PyTorch's matmul, softmax, dropout, even backprop) that helped clear things up. This, along with Karpathy's nanoGPT video, shaped my understanding - which is still a work in progress as new forms of the transformer architecture emerge.</p>
|
SVGDreamer: Text Guided Vector Graphics Generation with Diffusion Model | https://hf.co/blog/xingxm/svgdreamer | References |
Releasing Youtube-Commons: a massive open corpus for conversational and multimodal data | https://hf.co/blog/Pclanglais/youtube-commons |
<p>
We announce today the release of <a href="https://huggingface.co/datasets/PleIAs/YouTube-Commons">Youtube-Commons</a> on HuggingFace:</p>
<ul>
<li>Youtube-Commons is the largest corpus of video to date under an entirely free license.</li>
<li>Youtube-Commons comprises 2 million videos in CC-By with documented provenance and attribution.</li>
<li>We include the metadata and the full transcripts, which also makes it one of the largest collections of conversational data, with nearly 30 billion words.</li>
<li>Youtube-Commons is multilingual and includes translations in a variety of European languages.</li>
</ul>
<p>Youtube-Commons is a follow-up of Common-Corpus, an international initiative coordinated by <a href="/blog/Pclanglais/pleias.fr">Pleias</a> to release the largest open pre-training corpus coming from public domain sources. Youtube-Commons has similarly received the support of Lang:IA, a state start-up supported by the French Ministry of Culture and the Direction du numérique (<a href="https://huggingface.co/AgentPublic">Agent Public</a>). Pleias is a French start-up specialized in training Large Language Models for document processing on fully open and auditable corpora.</p>
<p>Youtube-Commons is made of materials released by their original authors under a free license (CC-By). There is currently a debate over the ethical and legal use of these resources for pre-training large text or multimodal models. We consider that respecting the terms of the license (especially in regards to attribution) and the general philosophy of Creative Commons are critical for any future end use project: we provide the necessary metadata to do so and invite all future projects to maintain key principles of reproducibility, transparency and reciprocal contribution to the commons.</p>
<p>Despite its size, Youtube-Commons is still far from covering the entire set of freely licensed content available on Youtube. One of our incentives for releasing this corpus was the highly controversial use of copyrighted content from Youtube videos by OpenAI, both for GPT-4 (with more than 1 million transcripts) and, likely, for their video generation model, SORA. Through this release we aim to demonstrate that it is possible to reconcile AI development with scientific reproducibility and conformity to copyright law. We also seek to empower alternative, more ethical approaches.</p>
<p>Youtube-Commons is only a first step. We are currently expanding this collection both in size but also in content with their associated audio, image and video materials.</p>
|
Design choices for Vision Language Models in 2024 | https://hf.co/blog/gigant/vlm-design | Where are Vision-Language Models headed? |
It's raining diffusion personalization techniques☔️🎭🖼️ | https://hf.co/blog/linoyts/zero-shot-personalization |
<p>
Recently, generating high-quality portraits from reference photos was made possible with as little as a single reference image & without any optimization⚡️</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/638f308fc4444c6ca870b60a/U4Jxr5L5htNaxv02CRiNR.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/638f308fc4444c6ca870b60a/U4Jxr5L5htNaxv02CRiNR.png"/></a></p>
<p><em>figure taken from <a href="https://huggingface.co/papers/2401.07519">InstantID: Zero-shot Identity-Preserving Generation in Seconds</a></em></p>
<p>Using these new zero-shot methods, one can easily generate a self portrait with their choice of style, composition, and background👩🏻🎨</p>
<p>Here are 3 zero-shot pipelines to know and try🚀 </p>
<ol>
<li><a href="https://huggingface.co/spaces/multimodalart/face-to-all">Face-to-all</a></li>
<li><a href="https://huggingface.co/spaces/InstantX/InstantID">InstantID</a></li>
<li><a href="https://huggingface.co/spaces/multimodalart/Ip-Adapter-FaceID">IP Adapter FaceID</a></li>
</ol>
<p><a href="https://huggingface.co/spaces/multimodalart/Ip-Adapter-FaceID">🎭<strong>IP Adapter FaceID</strong>🎭</a></p>
<ul>
<li><a href="https://huggingface.co/papers/2308.06721">📗IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models</a></li>
<li><a href="https://github.com/tencent-ailab/IP-Adapter" rel="nofollow">Code 👩💻</a></li>
<li><a href="https://huggingface.co/spaces/multimodalart/Ip-Adapter-FaceID">Demo 🤗</a></li>
</ul>
<p>IP Adapters consist of 2 core components:</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/638f308fc4444c6ca870b60a/AAFuso-fvot29zyU4GQNc.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/638f308fc4444c6ca870b60a/AAFuso-fvot29zyU4GQNc.png"/></a></p>
<ol>
<li>An <code>image encoder</code> to extract image features (from the reference image/s)</li>
<li><strong>Decoupled cross-attention</strong> layers for text features and image features. A new cross-attention layer is added for each cross-attention layer in the original UNet model to insert the image features (a rough sketch of this idea follows the list).
💡To improve face fidelity, <a href="https://huggingface.co/h94/IP-Adapter-FaceID">IP Adapter FaceID</a> introduces face embeddings instead of (or, in IP Adapter FaceID Plus, in addition to) the CLIP embeddings.</li>
</ol>
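<p>Here is a toy sketch of the decoupled cross-attention idea from point 2, just to illustrate the mechanism. This is not the actual IP-Adapter implementation; the layer names, head count, and the assumption that <code>dim</code> is divisible by the number of heads are all made up for illustration:</p>
<pre><code class="language-python">import torch
import torch.nn as nn

class DecoupledCrossAttention(nn.Module):
    """Toy sketch: one cross-attention for text features, a second one for image features."""

    def __init__(self, dim, num_heads=8, scale=1.0):
        super().__init__()
        # dim is assumed to be divisible by num_heads
        self.text_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.image_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.scale = scale  # plays the role of the IP Adapter scale

    def forward(self, hidden_states, text_feats, image_feats):
        # Original text cross-attention of the UNet block
        text_out, _ = self.text_attn(hidden_states, text_feats, text_feats)
        # Extra cross-attention that injects the image-prompt features
        image_out, _ = self.image_attn(hidden_states, image_feats, image_feats)
        return text_out + self.scale * image_out
</code></pre>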
<p><a href="https://huggingface.co/spaces/InstantX/InstantID">🎭<strong>InstantID</strong>🎭</a></p>
<ul>
<li><a href="https://huggingface.co/papers/2401.07519">📘InstantID: Zero-shot Identity-Preserving Generation in Seconds</a></li>
<li><a href="https://github.com/InstantID/InstantID" rel="nofollow">Code 👩💻</a></li>
<li><a href="https://huggingface.co/spaces/InstantX/InstantID">Demo 🤗</a></li>
</ul>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/638f308fc4444c6ca870b60a/DdWnrxg-FC2PkDr5C3zYH.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/638f308fc4444c6ca870b60a/DdWnrxg-FC2PkDr5C3zYH.png"/></a></p>
<p>Similar to IP Adapter, InstantID also makes use of ID embeddings and decoupled cross-attention, but adds a new component: <strong>IdentityNet</strong></p>
<p>💡IdentityNet - an adapted ControlNet - meant to encode the detailed features from the reference facial image with additional spatial control, with 2 main modifications to ControlNet:</p>
<p>❶ Instead of fine-grained OpenPose facial keypoints, only five facial keypoints are used (two for the eyes, one for the nose, and two for the mouth) for conditional input. </p>
<p>❷ Eliminate text prompts and use ID embedding as conditions for cross-attention layers in the ControlNet</p>
<p><a href="https://huggingface.co/spaces/multimodalart/face-to-all">🎭<strong>Face-to-all</strong>🎭</a></p>
<ul>
<li><a href="https://huggingface.co/spaces/multimodalart/face-to-all/tree/main">Code 👩💻</a></li>
<li><a href="https://huggingface.co/spaces/multimodalart/face-to-all">Demo 🤗</a></li>
</ul>
<p>a diffusers 🧨 workflow inspired by <a href="https://huggingface.co/fofr">@fofr</a>'s Face-to-Many ComfyUI workflow🔥
<a href="https://cdn-uploads.huggingface.co/production/uploads/638f308fc4444c6ca870b60a/Q8rv4zS0MGi5uXx7jFN4Y.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/638f308fc4444c6ca870b60a/Q8rv4zS0MGi5uXx7jFN4Y.png"/></a></p>
<p>This workflow extends the original InstantID pipeline & combines it with <em>any</em> SDXL LoRA:</p>
<ol>
<li>adding the option to <strong>stylize with any SDXL style LoRA</strong> - especially useful for styles that aren't known to the base diffusion model (browse the <a href="https://huggingface.co/spaces/enzostvs/lora-studio">LoRA Studio</a> for inspo ✨)</li>
<li>improving structure preservation - maintaining the composition of the reference image.</li>
</ol>
|
History of State Space Models (SSM) in 2022 | https://hf.co/blog/lbourdois/ssm-2022 | <strong>Citation</strong> |
What Historical AI Breakthroughs Have Been Unappreciated by The Mainsteam Media? | https://hf.co/blog/Smooke/ai-breakthroughs-unappreciated-by-mainstream-media |
<p>
<i>Recently had the chance to interview <a href="https://twitter.com/jbrowder1" rel="nofollow">Joshua Browder</a>, the Founder/CEO of <a href="https://donotpay.com" rel="nofollow">DoNotPay</a>. <a href="https://hackernoon.com/multimodal-is-the-most-unappreciated-ai-breakthrough-says-donotpayceo-joshua-browder" rel="nofollow">The interview is published in full on HackerNoon</a>, and below is an excerpt that I think aptly summarizes the meteoric rise of AI from an entrepreneur in the thick of it.</i></p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/64862a25cf5ad5e1f0482ef2/Gr-slwIgkO2YK-QM4dusS.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/64862a25cf5ad5e1f0482ef2/Gr-slwIgkO2YK-QM4dusS.png"/></a></p>
<p><b><a href="https://hackernoon.com/about/david" rel="nofollow">David Smooke</a>:</b> Since the launch of ChatGPT, the AI boom’s been in mainstream media’s crosshairs. PyTorch and TensorFlow were monumental achievements that maybe weren’t fully appreciated until something more user friendly was built with/atop/beyond it. What future AI breakthroughs have you excited? And what historical AI breakthroughs do you think have been under-appreciated by the media?</p>
<p><b>Joshua Browder:</b> It feels like we are making years worth of progress every month in A.I. and things that were not possible even last Fall are possible today.</p>
<p>The first major breakthrough was when GPT 3 was coherent enough to hold a conversation. At that point, we built an A.I. that can cancel subscriptions. As you may know, some companies, such as The New York Times, make you chat with an agent, just to cancel a subscription. It felt like magic the first time we canceled a magazine subscription with A.I.</p>
<p>Then came GPT-4. The reasoning functionality for what we were trying to accomplish seemed like an order of magnitude improvement, so it allowed for more sophisticated products. Recently, we launched A.I. bill negotiation, where our robots log in to your utility account (such as Comcast) and start chatting with an agent to get you a discount. In some cases, the big companies are using A.I. (and we are using A.I.), so the two A.I.s are battling it out. With GPT 3, this use case would not have been possible.</p>
<p>Multimodal, where A.I. can accept different types of inputs (such as images), is probably the most unappreciated breakthrough by the media. I don’t think many consumers realize that ChatGPT can “see.” At DoNotPay, we are using GPT-4 vision to assess parking signage, such as when our system prompts GPT-4 to determine: “is a tree covering the sign?”</p>
<p>Latency is still the thing that needs to improve the most. 6 months ago, both the large language models (and the voice models) would take too long to hold a conversation on the phone. For our purposes, a lot of consumer rights disputes get handled over there. It seems we are finally at the point where we can build phone bots to complete tasks on peoples’ behalf, though we still have some minor technical improvements that need to happen.</p>
<p><i>Read the Full Interview: <a href="https://hackernoon.com/multimodal-is-the-most-unappreciated-ai-breakthrough-says-donotpayceo-joshua-browder" rel="nofollow">'Multimodal is the most unappreciated AI breakthrough' says DoNotPay CEO Joshua Browder</a>, <a href="https://hackernoon.com/c/ai" rel="nofollow">AI Blogs on HackerNoon</a>, and <a href="https://app.hackernoon.com/new?search=general-interview&ref=hackernoon.com" rel="nofollow">Get Interviewed by HackerNoon</a>.</i></p>
|
Analysis on evaluating 7 bilions italian LLMs | https://hf.co/blog/giux78/analysis-on-ita-llm | Conclusion |
DS-MoE: Making MoE Models More Efficient and Less Memory-Intensive | https://hf.co/blog/bpan/ds-moe |
<p>
<em>Estimated reading time: 4 minutes</em> </p>
<p>Mixture-of-Experts (MoE) language models are known for their ability to reduce computing needs by 2 to 4 times compared to traditional dense models, without sacrificing performance. This makes them especially useful in situations where computing resources are limited. However, MoE models typically need 2 to 4 times more parameters to perform as well as a dense model. For example, models like DeepSeekMoE-16B and Qwen1.5-MoE-A2.7B, which have around 16B total parameters, were designed to match the performance of a 7B model. The large number of parameters in MoE models incurs larger GPU memory requirements, which makes them less efficient in I/O-bounded scenarios like autoregressive generation.</p>
<figure style="text-align: center;">
<img alt="Alternative Text" height="450" src="https://cdn-uploads.huggingface.co/production/uploads/640e3a753830fd441c2c768d/POhcBNq6Gj_rHzp6a8H_h.png" style="margin: 0 auto; display: block;" width="400"/>
<figcaption style="text-align: center;">Figure 1: Decoding Throughput of Dense Models and the SMoE Models with Similar Performance. We test the performance under the setup where only the input length is 1 and output length is 512. This study reveals that traditional Sparse Mixture of Experts (SMoE) models exhibit reduced output throughput in I/O-bounded situations despite their lower computational demands. The models are tested with HuggingFace Transformers.</figcaption>
</figure>
<p>Is it necessary for MoE models to be so large to achieve high performance? Can we create an MoE model that maintains performance but uses fewer parameters and less computational power? Enter DS-MoE. This model achieves similar performance to dense models but uses about one-third of the computational resources and only half as many parameters as other MoE models.</p>
<figure style="text-align: center;">
<img alt="Alternative Text" height="450" src="https://cdn-uploads.huggingface.co/production/uploads/640e3a753830fd441c2c768d/-cktt05v8p7-U-L33rIgs.png" style="margin: 0 auto; display: block;" width="400"/>
<figcaption style="text-align: center;">Figure 2: Number of Parameters for Performance-Matched Models. We plot the size and computational profiles of the Dense-3B, SMoE-5B, and DS-MoE-3B models trained with 100B tokens, each achieving a comparable averaged task performance. DS-MoE demonstrates both computational efficiency and parameter efficiency, where the computational cost is quantified by counting the number of active parameters engaged during inference.</figcaption>
</figure>
<p>The concept of DS-MoE involves densely training the experts and forcing the model's routers to gradually ignore unnecessary experts for a given token. We add a Mutual Information (MI) loss to the training process, which not only balances the load of each expert across the entire batch, but also encourages each input token to concentrate its gating probability on fewer experts.</p>
<figure style="text-align: center;">
<img alt="Alternative Text" src="https://cdn-uploads.huggingface.co/production/uploads/640e3a753830fd441c2c768d/dCzK6pAcxQDsb-70uPDIe.png" style="margin: 0 auto; display: block;"/>
<figcaption style="text-align: center;">Figure 3: Subfigure (a) illustrates the conventional sparse training method in MoE models, characterized by sparse gradient propagation in both the router and the experts. Subfigure (b) details the dense training strategy in DS-MoE, which involves dense propagation of gradients for both routers and experts.</figcaption>
</figure>
<p>The MI loss is defined as
$$ L_{MI} = - H(e) + \frac{1}{|X|}\sum_{x\in X} H(e|x), \quad H(e) = - \sum_{i=1}^{N} p(e)\log(p(e)), $$
where <strong>X</strong> denotes the tokens in a minibatch, and <strong>e</strong> denotes the experts. Intuitively, maximizing H(e) balances the load across experts over the entire batch, while minimizing H(e|x) encourages each input <strong>x</strong> to concentrate its gating probability on fewer experts. </p>
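<p>For concreteness, here is a minimal PyTorch sketch of this loss, assuming the router already produces per-token gating probabilities; the function name and tensor shapes are our own illustrative choices rather than the paper's code.</p>
<pre><code class="language-python">import torch

def mutual_information_loss(gate_probs: torch.Tensor) -> torch.Tensor:
    # gate_probs: (num_tokens, num_experts), each row is p(e|x) for one token x in X.
    eps = 1e-9
    p_e = gate_probs.mean(dim=0)                      # marginal expert distribution p(e)
    h_e = -(p_e * (p_e + eps).log()).sum()            # H(e)
    h_e_given_x = -(gate_probs * (gate_probs + eps).log()).sum(dim=1).mean()  # mean H(e|x)
    return -h_e + h_e_given_x                         # L_MI

# Toy example: 8 tokens routed over 4 experts.
probs = torch.softmax(torch.randn(8, 4), dim=-1)
print(mutual_information_loss(probs))
</code></pre>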
<p>During inference, DS-MoE activates only the top K experts based on their scores. K is either a predefined value or chosen adaptively as the number of experts whose scores exceed a certain threshold. As a result, DS-MoE can perform as well as similarly sized dense models while using far fewer active parameters, as demonstrated in the table below.</p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th>Model</th>
<th>HellaSwag</th>
<th>PIQA</th>
<th>WinoGrande</th>
<th>SciQ</th>
<th>Arc-e</th>
<th>Arc-c</th>
<th>Avg. Perf.</th>
<th>Active Params</th>
</tr>
</thead><tbody><tr>
<td>Dense-3B</td>
<td>40.4</td>
<td>71.4</td>
<td>58.7</td>
<td>86.0</td>
<td>59.6</td>
<td>26.1</td>
<td>57.0</td>
<td>2705M</td>
</tr>
<tr>
<td>SMoE-5B</td>
<td>40.1</td>
<td>70.7</td>
<td>56.5</td>
<td>85.6</td>
<td>58.4</td>
<td>24.8</td>
<td>56.0</td>
<td>1212M</td>
</tr>
<tr>
<td>DS-MoE-3B</td>
<td>39.3</td>
<td>71.6</td>
<td>57.9</td>
<td>85.6</td>
<td>57.7</td>
<td>24.9</td>
<td>56.2</td>
<td>934M</td>
</tr>
<tr>
<td>Dense-6B</td>
<td>44.3</td>
<td>72.2</td>
<td>59.9</td>
<td>88.0</td>
<td>62.9</td>
<td>27.9</td>
<td>59.2</td>
<td>6186M</td>
</tr>
<tr>
<td>DS-MoE-6B</td>
<td>43.5</td>
<td>73.0</td>
<td>57.9</td>
<td>86.9</td>
<td>61.9</td>
<td>27.9</td>
<td>58.5</td>
<td>1813M</td>
</tr>
</tbody>
</table>
</div>
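<p>As a small illustration of the inference-time routing described above, the sketch below contrasts a predefined K with a threshold-based adaptive K; the helper name and the threshold value are hypothetical, not taken from the DS-MoE implementation.</p>
<pre><code class="language-python">import torch

def select_experts(scores, k=None, threshold=0.05):
    # scores: (num_experts,) gating scores for a single token.
    if k is None:
        # Adaptive mode: K = number of experts whose score exceeds the threshold.
        k = max(1, int((scores > threshold).sum().item()))
    top = torch.topk(scores, k)
    return top.indices, top.values

scores = torch.softmax(torch.randn(16), dim=-1)
adaptive_idx, _ = select_experts(scores)        # adaptive K
fixed_idx, _ = select_experts(scores, k=4)      # predefined K
print(len(adaptive_idx), fixed_idx.tolist())
</code></pre>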
<p>We also tested DS-MoE with vLLM to see how it compares to other models in terms of processing speed and memory usage at the 7B performance tier. We looked at how many requests and tokens it could handle per second, using a setup where each input and output consisted of 1,000 tokens and the GPU memory usage was capped at 90%.</p>
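<p>For readers who want to reproduce this kind of measurement, a rough sketch with vLLM is shown below; the model name, number of requests and prompt contents are placeholders, not the exact benchmark harness used here.</p>
<pre><code class="language-python">import time
from vllm import LLM, SamplingParams

# Cap GPU memory at 90% as in the setup described above.
llm = LLM(model="mistralai/Mistral-7B-v0.1", gpu_memory_utilization=0.9)

prompts = ["Tell me a long story about language models."] * 64   # placeholder requests
params = SamplingParams(max_tokens=1000)                          # ~1,000 output tokens each

start = time.time()
outputs = llm.generate(prompts, params)
elapsed = time.time() - start

gen_tokens = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"requests/s: {len(prompts) / elapsed:.2f}  tokens/s: {gen_tokens / elapsed:.1f}")
</code></pre>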
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th>Model</th>
<th>Total Params</th>
<th>Active Params</th>
<th>Model Memory</th>
<th>A100 Throughput (req/s)</th>
<th>A100 TPS (tokens/s)</th>
<th>H100 Throughput (req/s)</th>
<th>H100 TPS (tokens/s)</th>
</tr>
</thead><tbody><tr>
<td>Dense-6B</td>
<td>6.4B</td>
<td>6.4B</td>
<td>12.3 GiB</td>
<td>1.04</td>
<td>2079.8</td>
<td>1.40</td>
<td>2808.7</td>
</tr>
<tr>
<td>Mistral-7B</td>
<td>7.2B</td>
<td>7.2B</td>
<td>13.5 GiB</td>
<td>1.07</td>
<td>2140.8</td>
<td>1.52</td>
<td>3047.4</td>
</tr>
<tr>
<td>DeepSeekMoE</td>
<td>17.3B</td>
<td>2.8B</td>
<td>30.5 GiB</td>
<td>1.17</td>
<td>2330.1</td>
<td>1.57</td>
<td>3144.1</td>
</tr>
<tr>
<td>Qwen1.5-MoE</td>
<td>16.4B</td>
<td>2.7B</td>
<td>26.7 GiB</td>
<td>1.33</td>
<td>2665.7</td>
<td>1.81</td>
<td>3616.9</td>
</tr>
<tr>
<td>DS-MoE-6B</td>
<td>6.5B</td>
<td>2.2B</td>
<td>12.6 GiB</td>
<td>2.00</td>
<td>3992.8</td>
<td>2.30</td>
<td>4603.9</td>
</tr>
</tbody>
</table>
</div>
<p>The test shows that DS-MoE outperforms dense models in computational cost and sparsely trained MoEs in model memory, leading to faster processing in both computation-bound and I/O-bound scenarios. Note that DS-MoE-6B is not yet comparable to the other models in downstream performance, since it was trained on merely 100 billion tokens (versus the trillions used for the other models). Nevertheless, DS-MoE shows significant promise for matching the performance of dense models given a comparable volume of training data.</p>
<p><a href="https://arxiv.org/pdf/2404.05567.pdf" rel="nofollow"><em>Read More in the Paper</em></a></p>
|
RAG Empowerment: Cohere C4AI Command-R and Transformers Unveiled | https://hf.co/blog/Andyrasika/command-r-transformer | Conclusion |
🐦 The IBIS Challenge | https://hf.co/blog/nikgr/the-ibis-challenge |
<p>
Join <strong>the IBIS Challenge</strong>: an open competition in <strong>I</strong>nferring and predicting transcription factor <strong>Bi</strong>nding <strong>S</strong>pecificities.</p>
<p>Deciphering human gene regulation is a cornerstone of <em>modern molecular biology and biomedicine</em>. On the regulatory sequence level, the grammar of the gene regulation is defined by the binding specificities of special proteins, <em>the transcription factors</em>, which act at particular "genomic addresses" by recognizing 🧬 DNA sequence patterns in gene regulatory regions. Drawing inspiration from <em>DREAM and Kaggle competitions</em>, <strong>we invite you to join IBIS</strong> (<a href="https://ibis.autosome.org/" rel="nofollow">ibis.autosome.org</a>), an open challenge in computational sequence analysis for Inferring Binding Specificities of human transcription factors with classic bioinformatics and advanced machine learning (ML).</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/66060bf3cb0bd478ab559952/dFh5l6SgZ9CeljMuOSXVH.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/66060bf3cb0bd478ab559952/dFh5l6SgZ9CeljMuOSXVH.png"/></a></p>
<p>IBIS aims at a fair assessment of <strong>existing and novel methods solving the long-standing problem of DNA motif discovery: identifying and modeling recurrent DNA text patterns</strong> recognized by human transcription factors. In IBIS, we will assess classic methods as well as diverse modern approaches of arbitrary complexity. </p>
<p>🚀 Those include but are not limited to <strong>decision trees</strong> on top of k-mer frequencies, hidden Markov models (<strong>HMMs</strong>), convolutional neural networks (<strong>CNNs</strong>), recurrent neural networks (<strong>RNNs</strong>), long short-term memory (<strong>LSTM</strong>) models, as well as <strong>attention</strong> and <strong>transformer-based models</strong>.</p>
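<p>For illustration only, here is a minimal PyTorch sketch of the kind of CNN baseline mentioned above: one-hot encoded DNA goes in, a per-sequence binding score comes out. The architecture, filter sizes and encoding helper are our own illustrative choices, not part of the challenge.</p>
<pre><code class="language-python">import torch
import torch.nn as nn

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq: str) -> torch.Tensor:
    # Encode a DNA string as a (4, seq_len) one-hot tensor; unknown bases stay all-zero.
    x = torch.zeros(4, len(seq))
    for i, b in enumerate(seq):
        if b in BASES:
            x[BASES[b], i] = 1.0
    return x

class MotifCNN(nn.Module):
    def __init__(self, n_filters=64, motif_len=12):
        super().__init__()
        self.conv = nn.Conv1d(4, n_filters, kernel_size=motif_len)
        self.head = nn.Linear(n_filters, 1)

    def forward(self, x):                    # x: (batch, 4, seq_len)
        h = torch.relu(self.conv(x))         # scan the sequence for motif-like patterns
        h = h.max(dim=-1).values             # best match position per filter
        return self.head(h).squeeze(-1)      # binding score per sequence

model = MotifCNN()
batch = torch.stack([one_hot("ACGT" * 25), one_hot("TTGACGTCAA" * 10)])
print(model(batch))
</code></pre>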
<p>💡 <strong>IBIS allows arbitrary usage of the human genome or random DNA sequences to pre-train</strong> an artificial neural network <strong>or to extract features</strong>, provided this is performed from scratch. In particular, we allow using:</p>
<ul>
<li><strong>hg38 human genome</strong> assembly (including any existing <em>DNA language models pretrained solely on the genome sequence</em>);</li>
<li>precomputed biophysical features derived from the DNA sequences, such as the <strong>DNA shape features</strong>;</li>
<li>RepeatMasker track;</li>
<li><strong>protein-level metadata on transcription factors</strong> (including but not limited to protein sequence and domain information) available directly in UniProt (so, in theory, you can demonstrate <em>the power of pre-training on protein sequences</em>).
More details can be found in the <a href="https://ibis.autosome.org/docs/technical_details" rel="nofollow">IBIS documentation</a>.</li>
</ul>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/66060bf3cb0bd478ab559952/XU8nz5Xcbgca1Jg3CzZyq.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/66060bf3cb0bd478ab559952/XU8nz5Xcbgca1Jg3CzZyq.png"/></a></p>
<p>📊 To solve the challenge problem, IBIS provides a <strong>diverse array of unpublished experimental data on 40 human regulatory proteins</strong>, many of which remain unexplored in terms of preferred DNA binding patterns. </p>
<p>The challenge proceeds in two stages: <em>the online Leaderboard</em> (10 transcription factors) and <em>the offline Final</em> round (the remaining 30 transcription factors). Winners will be announced separately for each of the stages. <strong>🏆 The best methods of both stages will be highlighted in the post-challenge high-impact scientific paper, while the winners of the Primary track of the Final round will be invited to contribute as co-authors.</strong></p>
<ul>
<li>Learn more at <a href="https://ibis.autosome.org/" rel="nofollow">the IBIS Challenge website</a></li>
<li>Read <a href="https://twitter.com/halfacrocodile/status/1767284083632095646" rel="nofollow">The IBIS Challenge Twitter Thread</a></li>
<li>Organizers: <a href="https://ibis.autosome.org/docs/about_us" rel="nofollow">GRECO-BIT & Codebook consortia</a></li>
</ul>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/66060bf3cb0bd478ab559952/xSH3CQzfiPNAWl4XRvlCj.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/66060bf3cb0bd478ab559952/xSH3CQzfiPNAWl4XRvlCj.png"/></a></p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/66060bf3cb0bd478ab559952/MaZJIPDN5Vap-dVTz6gYa.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/66060bf3cb0bd478ab559952/MaZJIPDN5Vap-dVTz6gYa.png"/></a></p>
|
The LASER technique: Evaluating SVD compression | https://hf.co/blog/fractalego/mistral-laser-svd | Citation |
Open Source All About Data Processing, Dataverse | https://hf.co/blog/EujeongChoi/dataverse-opensource-for-data-processing | 4. Future Work and Contribution Points |
Many-shot jailbreaking | https://hf.co/blog/vladbogo/many-shot-jailbreaking | Conclusion |
Aurora-M: The First Open Source Biden-Harris Executive Order Red teamed Multilingual Language Model | https://hf.co/blog/mayank-mishra/aurora | Conclusion |
Gecko: Versatile Text Embeddings Distilled from Large Language Models | https://hf.co/blog/vladbogo/gecko | Conclusion |
Finetune Mixtral 8x7B with AutoTrain | https://hf.co/blog/abhishek/autotrain-mixtral-dgx-cloud-local |
<p>
In this blog, I'll show you how you can fine-tune Mixtral 8x7B on your own dataset using <a href="https://github.com/huggingface/autotrain-advanced" rel="nofollow">AutoTrain</a>. The amount of coding used in this blog post will be quite small. We will be writing <em>zero</em> lines of code! </p>
<p>Since Mixtral is quite a large model, it requires substantial hardware to fine-tune. For this post, we will be using the latest offering from Hugging Face: Train on DGX Cloud. However, note that you can follow the same process to train on your own hardware (or other cloud providers) too! Steps to train locally or on custom hardware are also provided in this blog.</p>
<p>NOTE: Train on DGX Cloud is only available for Enterprise accounts.</p>
<p>To finetune Mixtral-8x7B-Instruct on your custom dataset, click <a href="https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1">here</a> and then click on the train button. You will be shown a few options; select "NVIDIA DGX Cloud". </p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/5fa19f4ba13e063b8b2b5e11/q9mMztPLmjJ_p8LRtLbhx.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/5fa19f4ba13e063b8b2b5e11/q9mMztPLmjJ_p8LRtLbhx.png"/></a></p>
<p>Once done, an AutoTrain space will be created for you, where you can upload your data, select parameters and start training.</p>
<p>If running locally, all you have to do is install AutoTrain and start the app:</p>
<pre><code class="language-bash">$ pip install -U autotrain-advanced
$ <span class="hljs-built_in">export</span> HF_TOKEN=your_huggingface_write_token
$ autotrain app --host 127.0.0.1 --port 8080
</code></pre>
<p>Once done, you can point your browser to 127.0.0.1:8080 and you are ready to finetune locally. </p>
<p>If running on DGX Cloud, you will see the option to choose 8xH100 in the hardware dropdown. This dropdown will be disabled if running locally:</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/5fa19f4ba13e063b8b2b5e11/5hAT91mN27lgrl55Fs-sz.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/5fa19f4ba13e063b8b2b5e11/5hAT91mN27lgrl55Fs-sz.png"/></a></p>
<p>As you can see, the AutoTrain UI offers a lot of options for different types of tasks, datasets and parameters. One can train almost any kind of model on their own dataset using AutoTrain 💥 If you are an advanced user and want to tune more parameters, all you have to do is click on <code>Full</code> under the training parameters! </p>
<p>The more parameters there are, the more confusing it gets for end-users, so today we are limiting ourselves to the basic parameters. 99% of the time, the basic parameters are all you need to adjust to get a model that performs amazingly well in the end 😉</p>
<p>Today, we are selecting the <code>no_robots</code> dataset from the Hugging Face H4 team. You can see the dataset <a href="https://huggingface.co/datasets/HuggingFaceH4/no_robots">here</a>. This is what one of the samples in the dataset looks like:</p>
<pre><code>[ { "content": "Please summarize the goals for scientists in this text:\n\nWithin three days, the intertwined cup nest of grasses was complete, featuring a canopy of overhanging grasses to conceal it. And decades later, it served as Rinkert’s portal to the past inside the California Academy of Sciences. Information gleaned from such nests, woven long ago from species in plant communities called transitional habitat, could help restore the shoreline in the future. Transitional habitat has nearly disappeared from the San Francisco Bay, and scientists need a clearer picture of its original species composition—which was never properly documented. With that insight, conservation research groups like the San Francisco Bay Bird Observatory can help guide best practices when restoring the native habitat that has long served as critical refuge for imperiled birds and animals as adjacent marshes flood more with rising sea levels. “We can’t ask restoration ecologists to plant nonnative species or to just take their best guess and throw things out there,” says Rinkert.", "role": "user" }, { "content": "Scientists are studying nests hoping to learn about transitional habitats that could help restore the shoreline of San Francisco Bay.", "role": "assistant" } ]
</code></pre>
<p>This dataset is pretty much the standard for SFT training. If you want to train your own conversational bot on your custom dataset, this would be the format to follow! 🤗 Thanks to the H4 team! </p>
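<p>If you want to peek at the data locally first (entirely optional, AutoTrain does not require it), a small sketch is shown below; the zephyr tokenizer is used only because its chat template matches the one we select later in this post.</p>
<pre><code class="language-python">from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("HuggingFaceH4/no_robots", split="train_sft")
tok = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

sample = ds[0]["messages"]              # list of {"role": ..., "content": ...} dicts
print(sample)
print(tok.apply_chat_template(sample, tokenize=False))
</code></pre>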
<p>Now we have the dataset and the AutoTrain UI up and running. All that's left is to point the UI to the dataset, adjust the parameters and click the "Start" button. Here's a view of the UI right before clicking "Start".</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/5fa19f4ba13e063b8b2b5e11/9kraGk9w5QZlyGwJ3RctN.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/5fa19f4ba13e063b8b2b5e11/9kraGk9w5QZlyGwJ3RctN.png"/></a></p>
<p>We chose the Hugging Face Hub dataset option and changed the following:</p>
<ul>
<li>dataset name: <code>HuggingFaceH4/no_robots</code></li>
<li>train split: <code>train_sft</code>; this is how the split is named in this specific dataset.</li>
<li>column mapping: <code>{"text": "messages"}</code>. This maps AutoTrain's <code>text</code> column to the dataset's text column, which is called <code>messages</code> in our case.</li>
</ul>
<p>For parameters, the following worked well:</p>
<pre><code>{
"block_size": 1024,
"model_max_length": 2048,
"mixed_precision": "bf16",
"lr": 0.00003,
"epochs": 3,
"batch_size": 2,
"gradient_accumulation": 4,
"optimizer": "adamw_bnb_8bit",
"scheduler": "linear",
"chat_template": "zephyr",
"target_modules": "all-linear",
"peft": false
}
</code></pre>
<p>Here we are using the <code>adamw_bnb_8bit</code> optimizer and the zephyr chat template. Depending on your dataset, you can use the <code>zephyr</code>, <code>chatml</code> or <code>tokenizer</code> chat template. Or you can set it to <code>none</code> and format the data the way you like before uploading to AutoTrain: the possibilities are endless.</p>
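<p>If you go the <code>none</code> route, the sketch below shows one possible way to pre-format your data into a single <code>text</code> column before uploading; the message markers and file name are assumptions, and any consistent format works.</p>
<pre><code class="language-python">import json

def to_text(messages):
    # One possible flat format; pick whatever your model/template expects.
    return "\n".join(f"<|{m['role']}|>\n{m['content']}" for m in messages)

rows = [
    {"text": to_text([
        {"role": "user", "content": "What does AutoTrain do?"},
        {"role": "assistant", "content": "It fine-tunes models without writing code."},
    ])}
]

with open("train.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
</code></pre>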
<p>Note that we are not using quantization for this specific model, and PEFT has been disabled 💥</p>
<p>Once done, click on the "Start" button, grab a coffee and relax. </p>
<p>When I tried this with the same parameters and dataset on 8xH100, training took ~45 minutes (3 epochs) and my model was pushed to the Hub as a private model for me to try instantly 🚀 If you want, you can take a look at the trained model <a href="https://huggingface.co/abhishek/autotrain-xva0j-mixtral8x7b">here</a>.</p>
<p>Great! So we finetuned the Mixtral 8x7B Instruct model on our own custom dataset, and the model is ready to be deployed using Hugging Face's Inference Endpoints. </p>
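<p>As a quick, hedged illustration of trying the model locally with <code>transformers</code> (Inference Endpoints is just one option), see the sketch below; the repo is private, so a read token is assumed, and we assume the saved tokenizer carries the zephyr chat template used during training.</p>
<pre><code class="language-python">from transformers import AutoTokenizer, pipeline

model_id = "abhishek/autotrain-xva0j-mixtral8x7b"   # private repo from this post
token = "hf_..."                                    # your Hugging Face read token

tok = AutoTokenizer.from_pretrained(model_id, token=token)
pipe = pipeline("text-generation", model=model_id, tokenizer=tok,
                device_map="auto", token=token)

messages = [{"role": "user", "content": "Summarize what AutoTrain does."}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(pipe(prompt, max_new_tokens=128)[0]["generated_text"])
</code></pre>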
<p>BONUS: if you like CLI, here is the command to run:</p>
<pre><code>autotrain llm \
--train \
--trainer sft \
--model mistralai/Mixtral-8x7B-Instruct-v0.1 \
--data-path HuggingFaceH4/no_robots \
--train-split train_sft \
--text-column messages \
--chat-template zephyr \
--mixed-precision bf16 \
--lr 3e-5 \
--optimizer adamw_bnb_8bit \
--scheduler linear \
--batch-size 2 \
--epochs 3 \
--gradient-accumulation 4 \
--block-size 1024 \
--max-length 2048 \
--padding right \
--project-name autotrain-xva0j-mixtral8x7b \
--username abhishek \
--push-to-hub
</code></pre>
<p>In case of questions, reach out at <a href="mailto:[email protected]" rel="nofollow">[email protected]</a> or on Twitter: @abhi1thakur</p>
|
How do Textual Inversion tokens destroy prompts? | https://hf.co/blog/Isamu136/textual-inversion-prompt-destruction | Conclusion and Future Direction |
Experiments with Bitnet 1.5 (~ngmi~) | https://hf.co/blog/joey00072/experiments-with-bitnet-1-5 | Training code |
Create Mixtures of Experts with MergeKit | https://hf.co/blog/mlabonne/frankenmoe | References |
Elevate Responses: RAG with LlamaIndex & MongoDB | https://hf.co/blog/Andyrasika/mongodb-llamaindex-rag | Conclusion |
Samantha Mistral Instruct 7b - Comprehensive Bulleted Notes | https://hf.co/blog/cognitivetech/samantha-mistral-instruct-7b-bulleted-notes | Thanks |
Policy Questions Blog 1: AI Data Transparency Remarks for NAIAC Panel 📚🔍⚖️ | https://hf.co/blog/yjernite/naiac-data-transparency | A Minimum Standard for Meaningful Data Disclosure |
Protein similarity and Matryoshka embeddings | https://hf.co/blog/monsoon-nlp/proteins-matryoshka-embeddings | 🦠🧬🤖🪆 Future Thoughts |
A brief analysis of automerger data, feat. SLERP and DARE-TIES LLM merging | https://hf.co/blog/kgourgou/a-first-look-at-automerger-data | To sum up |
Data exploration and filtering with Nomic Atlas | https://hf.co/blog/visheratin/nomic-data-cleaning | Conclusion |
Giskard Bot: Identifying robustness, performance and ethical vulnerabilities in the Top 10 Most Popular Hugging Face Models | https://hf.co/blog/JMJM/vulnerabilities-top-10-hf-models | Conclusion |
Releasing Common Corpus: the largest public domain dataset for training LLMs | https://hf.co/blog/Pclanglais/common-corpus |
<p>
We announce today the release of <a href="https://huggingface.co/collections/PleIAs/common-corpus-65d46e3ea3980fdcd66a5613">Common Corpus</a> on HuggingFace:</p>
<ul>
<li>Common Corpus is the largest public domain dataset released for training LLMs.</li>
<li>Common Corpus includes 500 billion words from a wide diversity of cultural heritage initiatives.</li>
<li>Common Corpus is multilingual and the largest corpus to date in English, French, Dutch, Spanish, German and Italian.</li>
<li>Common Corpus shows it is possible to train fully open LLMs on sources without copyright concerns.</li>
</ul>
<p>Common Corpus is an international initiative coordinated by <a href="/blog/Pclanglais/pleias.fr">Pleias</a>, involving researchers in LLM pretraining, AI ethics and cultural heritage, in association with major organizations committed to an open science approach for AI (HuggingFace, <a href="https://occiglot.github.io" rel="nofollow">Occiglot</a>, <a href="https://www.eleuther.ai/" rel="nofollow">Eleuther</a>, <a href="https://www.nomic.ai/" rel="nofollow">Nomic AI</a>). Common Corpus has received the support of Lang:IA, a state start-up supported by the French Ministry of Culture and the Direction du numérique (<a href="https://huggingface.co/AgentPublic">Agent Public</a>). Pleias is a French start-up specialized in training Large Language Models for document processing on fully open and auditable corpora.</p>
<p>Contrary to what most large AI companies claim, the release of Common Corpus aims to show it is possible to train Large Language Models on a fully open and reproducible corpus, without using copyrighted content. This is only an initial part of what we have collected so far, partly due to the lengthy process of verifying copyright duration. In the following weeks and months, we will continue to publish many additional datasets drawing on other open sources, such as open data and open science.</p>
<p>Common Corpus holds the largest open English dataset to date, with 180 billion words. This includes a major US collection of 21 million digitized newspapers, Chronicling America, which can also be fully explored with an <a href="https://atlas.nomic.ai/data/aaron/pdnews-21286k-tr2k-addmeta/map" rel="nofollow">original corpus map</a> created by Nomic AI, as well as large monograph datasets collected by digital historian Sebastian Majstorovic.</p>
<p>Common Corpus is also multilingual. It incorporates the largest open datasets to date in French (110 billion words), <a href="https://huggingface.co/datasets/PleIAs/German-PD">German</a> (30 billion words), <a href="https://huggingface.co/datasets/PleIAs/Spanish-PD-Books">Spanish</a>, <a href="https://huggingface.co/datasets/PleIAs/Dutch-PD">Dutch</a> and <a href="https://huggingface.co/datasets/PleIAs/Italian-PD">Italian</a>, as well as a very long tail of low-resource languages that are currently hardly represented in the training of Large Language Models.</p>
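<p>For readers who want to explore one of these collections right away, here is a small sketch using the <code>datasets</code> library in streaming mode; the split name is an assumption and the available columns vary between collections.</p>
<pre><code class="language-python">from datasets import load_dataset

# Stream a few records instead of downloading the whole collection.
ds = load_dataset("PleIAs/Dutch-PD", split="train", streaming=True)
for i, record in enumerate(ds):
    print(record.keys())      # inspect whatever fields this collection exposes
    if i == 2:
        break
</code></pre>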
<p>Common Corpus is not only open but also of higher quality and more diverse than the web archive datasets commonly used for pretraining. It includes millions of books with reasoning-rich content, which makes it ideal for creating models with long context.</p>
<p>Common Corpus is the start of a long work in progress. Much remains to be done to enhance this collection. We aim to support a strong data commons for AI to ease research and make it more reproducible, but also to make AI more accessible, diverse and democratic by ensuring that anyone can have a look inside large models.</p>
|
What's Automatic Differentiation? | https://hf.co/blog/andmholm/what-is-automatic-differentiation | Personal |
Dive Deeper into Yi-9B | https://hf.co/blog/lorinma/yi-9b-divedeep | 📌 Related Resources |
Sparse Mixture of Experts Language Model from Scratch: Extending makeMoE with Expert Capacity | https://hf.co/blog/AviSoori1x/makemoe2 | Why is Expert Capacity even important? |
VideoMamba: State Space Model for Efficient Video Understanding | https://hf.co/blog/vladbogo/video-mamba | Conclusion |
Better RAG 3: The text is your friend | https://hf.co/blog/hrishioa/retrieval-augmented-generation-3-structure | Conclusion |