title | link | article
---|---|---|
Detecting the Deceptive: Unmasking Deep Fake Voices | https://hf.co/blog/Andyrasika/deepfake-detect | Conclusion |
AutoTrain Advanced now supports Experiment Tracking | https://hf.co/blog/rishiraj/log-autotrain | Conclusion |
Hearing is Believing: Revolutionizing AI with Audio Classification via Computer Vision | https://hf.co/blog/Andyrasika/voice-with-vision | Conclusion: |
Next token prediction with GPT | https://hf.co/blog/alonsosilva/nexttokenprediction | Next token prediction |
What kind of data lake do we need in the Big Model era? | https://hf.co/blog/lakesoul/what-kind-of-data-lake-do-we-need-in-the-big-model |
<p>
In the era of large models, big data and AI are undoubtedly the two most important technical ecosystems. Yet the two remain significantly divided in many respects, particularly in storage, format, process, framework, and platform, which poses real challenges for developers implementing end-to-end data processing and AI workflows.
As LakeSoul, an open-source data lakehouse project, we are therefore committed to finding new solutions that integrate big data and AI more effectively and bridge the gap between them. We have adopted an integrated Data + AI approach that lets users seamlessly connect data processing with AI model applications, enabling bidirectional interaction between data lakes and large-scale AI models.</p>
<p>1. Perfect combination</p>
<p>1.1 Data+AI integration design</p>
<p>The LakeSoul architecture successfully combines the Java big data ecosystem with the Python AI ecosystem, supporting training and inference across a variety of AI frameworks. Thanks to its powerful data management and compute capabilities, LakeSoul provides a comprehensive, broadly applicable suite of solutions. It supports big data compute engines such as Spark, Flink, and Presto, covering common needs in stream processing, batch computing, and BI analysis, and it integrates seamlessly with AI and data science frameworks such as PyTorch, Pandas, HuggingFace, and Ray.</p>
<p>1.2 Providing a solid data foundation for AI models</p>
<p>With its efficient and stable data processing capabilities, the LakeSoul architecture can easily handle terabytes of data, structured or unstructured. This capability is crucial for the training and inference of large models, since improving a large model's quality requires large volumes of training data. In addition, LakeSoul's high-performance Native IO design keeps large-model training efficient, further strengthening its competitiveness in the AI field.
The early stage of large-model training usually requires strict data screening and cleaning. This process involves substantial ETL work and relies on big data systems such as Spark for pre-processing: writing new data, de-duplication, cleaning dirty data, and so on, followed by multiple rounds of training iterations over different data versions. LakeSoul's all-in-one design, with its partitioning, snapshot, and incremental read/write capabilities, accelerates this iteration cycle.</p>
<p>1.3 Unlocking the potential of multimodal data</p>
<p>The LakeSoul architecture can also process unstructured data, including multimodal data such as text, images, audio, and video. These rich data resources can be used to train multimodal AI models such as BERT, CLIP, GPT, and Stable Diffusion, unlocking the potential of multimodal data.</p>
<p>2. Application cases</p>
<p>The AI modeling process based on LakeSoul is shown in the figure below. LakeSoul can process stream and batch data at the same time and supports sample pre-processing on the data lake. With Native IO, LakeSoul can feed AI models directly. In the following sections, we describe in detail how to improve the whole process of model training, inference, and application by relying on the LakeSoul lake storage platform and its seamless integration with AI ecosystems such as PyTorch and HuggingFace. We have posted the full code on GitHub, which you can access at the following link:
<a href="https://github.com/lakesoul-io/LakeSoul/python/examples" rel="nofollow">https://github.com/lakesoul-io/LakeSoul/python/examples</a></p>
<p>2.1 Start with a binary classification problem</p>
<p>Here we start with the classic Kaggle Titanic case, modeling with LakeSoul + PyTorch. Binary classification is widely used in industry; problems such as ranking in advertising and recommendation systems or loan-overdue estimation can, in principle, be modeled as binary classification. Our work falls into three stages:
1. Data ingestion: the raw data is imported into LakeSoul (a hedged code sketch follows the screenshot):
<a href="https://cdn-uploads.huggingface.co/production/uploads/6501873983de015e088440ee/4vtDoJb6FiizRSQQCvpsV.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/6501873983de015e088440ee/4vtDoJb6FiizRSQQCvpsV.png"/></a>
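<p>The screenshot above is not copy-pastable, so here is a minimal PySpark sketch of what this ingestion step could look like. The <code>lakesoul</code> datasource name and the <code>hashPartitions</code>/<code>hashBucketNum</code> options are assumptions based on the LakeSoul documentation and may differ across versions; all paths are placeholders.</p>
<pre><code class="language-python"># Hedged sketch: import the raw Titanic CSV into a LakeSoul table.
# Assumes Spark is launched with the LakeSoul connector on the classpath;
# the "lakesoul" format name and write options are assumptions taken from
# the LakeSoul docs and may differ across versions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("titanic-ingest").getOrCreate()

raw = (spark.read
            .option("header", "true")
            .option("inferSchema", "true")
            .csv("/data/titanic/train.csv"))   # placeholder path

(raw.write
    .format("lakesoul")
    .mode("overwrite")
    .option("hashPartitions", "PassengerId")   # primary-key column (assumption)
    .option("hashBucketNum", "4")
    .save("/lakesoul/titanic/raw"))            # placeholder table path
</code></pre>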
2. Data processing: we perform feature engineering on the LakeSoul platform, including one-hot encoding of categorical features, feature derivation, and normalization (a hedged sketch follows the screenshot):</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/6501873983de015e088440ee/JXjo0hVfBlQNx_CFUIi8k.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/6501873983de015e088440ee/JXjo0hVfBlQNx_CFUIi8k.png"/></a></p>
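<p>A hedged sketch of such a feature-engineering step with Spark ML, again treating the <code>lakesoul</code> format name and table path as assumptions; the column names are the standard Titanic ones:</p>
<pre><code class="language-python"># Hedged sketch: one-hot encoding, feature derivation, and normalization
# with Spark ML over data read back from LakeSoul (format name assumed).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.ml import Pipeline
from pyspark.ml.feature import (OneHotEncoder, StandardScaler,
                                StringIndexer, VectorAssembler)

spark = SparkSession.builder.appName("titanic-features").getOrCreate()
df = spark.read.format("lakesoul").load("/lakesoul/titanic/raw")

# Feature derivation: family size = siblings/spouses + parents/children + 1.
df = df.withColumn("FamilySize", F.col("SibSp") + F.col("Parch") + F.lit(1))

pipeline = Pipeline(stages=[
    StringIndexer(inputCol="Sex", outputCol="SexIdx", handleInvalid="keep"),
    StringIndexer(inputCol="Embarked", outputCol="EmbarkedIdx", handleInvalid="keep"),
    OneHotEncoder(inputCols=["SexIdx", "EmbarkedIdx"],
                  outputCols=["SexVec", "EmbarkedVec"]),
    VectorAssembler(inputCols=["SexVec", "EmbarkedVec", "Age", "Fare", "FamilySize"],
                    outputCol="rawFeatures", handleInvalid="skip"),
    StandardScaler(inputCol="rawFeatures", outputCol="features"),  # normalization
])
features = pipeline.fit(df).transform(df).select("features", "Survived")
</code></pre>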
<p>3. Model training: a 3-layer neural network written in PyTorch is trained and validated (a hedged sketch follows the screenshot):</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/6501873983de015e088440ee/bxcJYI4bLHdF1rQqTOnYe.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/6501873983de015e088440ee/bxcJYI4bLHdF1rQqTOnYe.png"/></a></p>
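<p>A minimal PyTorch sketch of such a 3-layer network for binary classification; the input width and hidden size below are placeholders that must match the engineered feature vector:</p>
<pre><code class="language-python"># Hedged sketch of a 3-layer network for the Titanic binary classification.
# num_features is a placeholder that must match the engineered features.
import torch
import torch.nn as nn

class TitanicNet(nn.Module):
    def __init__(self, num_features: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),   # single logit for the binary output
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TitanicNet(num_features=10)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a random batch.
x = torch.randn(64, 10)
y = torch.randint(0, 2, (64, 1)).float()
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
</code></pre>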
<p>Although this case works with a static dataset, LakeSoul's design philosophy and technical architecture also support real-time data updates, real-time feature updates, and online model learning. The case demonstrates LakeSoul's ability to handle large-scale computation and feature engineering while supporting AI model training and validation.</p>
<p>2.2 NLP pre-training model fine-tuning</p>
<p>The Titanic example above walked through the "data ingestion -> preprocessing -> model training" pipeline. Below, we train a sentiment classifier on the IMDB dataset, showing how to fine-tune a BERT-family model (distilbert-base-uncased) with the HuggingFace Trainer API.
Compare the original IMDB example provided by HuggingFace: <a href="https://huggingface.co/docs/transformers/tasks/sequence_classification">https://huggingface.co/docs/transformers/tasks/sequence_classification</a>
We only need to read the data source stored in LakeSoul through an iterable dataset and make a few adjustments to the TrainingArguments; most of the remaining code stays essentially unchanged:</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/6501873983de015e088440ee/lJs1Q7fOOMZJ9babeUHPk.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/6501873983de015e088440ee/lJs1Q7fOOMZJ9babeUHPk.png"/></a></p>
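<p>A hedged sketch of that fine-tuning step. The <code>lakesoul_iterable_dataset</code> helper below is hypothetical, standing in for however LakeSoul exposes a table as an iterable (streaming) dataset; the Trainer wiring follows the standard HuggingFace sequence-classification recipe, and <code>max_steps</code> is required because an iterable dataset has no known length:</p>
<pre><code class="language-python"># Hedged sketch of the fine-tuning step. `lakesoul_iterable_dataset` is a
# HYPOTHETICAL helper standing in for however LakeSoul streams a table as
# an iterable dataset of {"text", "label"} rows.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

def tokenize(example):
    return tokenizer(example["text"], truncation=True)

train_ds = lakesoul_iterable_dataset("imdb_train").map(tokenize)  # hypothetical

args = TrainingArguments(
    output_dir="imdb-distilbert",
    per_device_train_batch_size=16,
    max_steps=2000,   # required: an iterable dataset has no known length
    logging_steps=100,
)

Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorWithPadding(tokenizer),
).train()
</code></pre>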
<p>2.3 CLIP-based image-text search</p>
<p>The IMDB example above showed how to train a model with the LakeSoul and HuggingFace Trainer APIs. In this case, we use the Food-101 dataset to show how to run inference over samples with a CLIP model and implement image-text search. The process has two stages:
1. Model inference: a CLIP model from HuggingFace (clip-ViT-B-32-multilingual-v1) is used to embed every image in the image dataset (a hedged sketch follows the screenshot):</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/6501873983de015e088440ee/6Jj64YYdOmhh78e-_j0gd.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/6501873983de015e088440ee/6Jj64YYdOmhh78e-_j0gd.png"/></a></p>
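<p>A hedged sketch of stage 1 using sentence-transformers. In that model family, <code>clip-ViT-B-32</code> encodes images, while <code>clip-ViT-B-32-multilingual-v1</code> is the aligned multilingual text encoder used in the next stage; the glob path is a placeholder for Food-101 images read from the lake:</p>
<pre><code class="language-python"># Hedged sketch of stage 1: embed every image with CLIP via
# sentence-transformers. The path below is a placeholder.
from pathlib import Path
from PIL import Image
from sentence_transformers import SentenceTransformer

img_model = SentenceTransformer("clip-ViT-B-32")

image_paths = sorted(Path("/data/food101").glob("**/*.jpg"))  # placeholder
images = [Image.open(p) for p in image_paths]
img_embeddings = img_model.encode(images, convert_to_tensor=True,
                                  batch_size=32, show_progress_bar=True)
</code></pre>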
<p>2. Semantic search: the user enters a text description, and the system returns the best-matching images by computing the vector distance between the text embedding and the image embeddings (a hedged sketch follows the screenshot):</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/6501873983de015e088440ee/ekHfZJEQxvNDPUHHz2vE3.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/6501873983de015e088440ee/ekHfZJEQxvNDPUHHz2vE3.png"/></a></p>
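<p>A hedged sketch of stage 2, reusing <code>img_embeddings</code> and <code>image_paths</code> from the previous sketch; the query string is an arbitrary example:</p>
<pre><code class="language-python"># Hedged sketch of stage 2: encode a text query with the multilingual text
# encoder and rank images by cosine similarity.
from sentence_transformers import SentenceTransformer, util

text_model = SentenceTransformer("clip-ViT-B-32-multilingual-v1")

query = "a plate of sushi"  # example user query
query_embedding = text_model.encode(query, convert_to_tensor=True)

# `img_embeddings` and `image_paths` come from the previous stage.
hits = util.semantic_search(query_embedding, img_embeddings, top_k=5)[0]
for hit in hits:
    print(image_paths[hit["corpus_id"]], round(hit["score"], 3))
</code></pre>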
<p>Conclusion</p>
<p>In the previous article, we explored LakeSoul's Data + AI design philosophy in depth; in this article, we presented several concrete practice cases. In the future, we plan to publish more articles detailing how LakeSoul integrates with leading open-source AI frameworks such as PyTorch, HuggingFace, DeepSpeed, and Ray.
Just as Jobs redefined the mobile phone and the iPhone opened a whole new world of mobile Internet for users, LakeSoul is redefining the data lake for the era of big models, successfully integrating the Java big data ecosystem with the Python AI ecosystem. With LakeSoul, developers can move quickly from data processing to AI model applications, easily unlocking the value of multimodal data. We firmly believe this will open a new chapter for data lake technology in the era of large models!</p>
|
Fine-tune Flair Models on NER Dataset with 🤗 AutoTrain SpaceRunner | https://hf.co/blog/stefan-it/autotrain-flair-mobie | Results Section |
Estimating the Intrinsic Dimension of Protein Sequence Embeddings using ESM-2 | https://hf.co/blog/AmelieSchreiber/intrinsic-dimension-of-proteins | Conclusion and Some Use Cases |
Sparse LLM Inference on CPU | https://hf.co/blog/mwitiderrick/llm-infrerence-on-cpu | Accelerating Compressed Language Models |
Introduction to Dataset Creation - What Makes a Good Dataset? | https://hf.co/blog/acrastt/dataset-creation | Final Words |
Building Your First Kubeflow Pipeline: A Comprehensive Guide | https://hf.co/blog/turhancan97/building-your-first-kubeflow-pipeline | Extending the Basics to Real-World ML Projects |
Predicting Protein-Protein Interactions Using a Protein Language Model and Linear Sum Assignment | https://hf.co/blog/AmelieSchreiber/protein-binding-partners-with-esm2 | Conclusion |
InfiniText: Empowering Conversations & Content with Mistral-7B-Instruct-v0.1 | https://hf.co/blog/Andyrasika/mistral-7b-empowering-conversation | Conclusion |
Changes of Embeddings during Fine-Tuning of Vision Transformers (ViT) | https://hf.co/blog/MarkusStoll/embeddings-during-fine-tuning-of-vision-transform | References |
🕳️ Attention Sinks in LLMs for endless fluency | https://hf.co/blog/tomaarsen/attention-sinks | Citation |
Understanding InstaFlow/Rectified Flow | https://hf.co/blog/Isamu136/insta-rectified-flow | TODO list |
Using 🤗 to Train a GPT-2 Model for Music Generation | https://hf.co/blog/juancopi81/using-hugging-face-to-train-a-gpt-2-model-for-musi | Considering Ethical Implications |
Making AI-Generated Content Easier to Identify | https://hf.co/blog/alicia-truepic/identify-ai-generated-content | Experimenting for the Future |
Samantha and Mistral 7B: A Powerful and Versatile Language Model Duo | https://hf.co/blog/Andyrasika/samantha-and-mistral-7b | Resources: |
InternLM-20B is officially released on Hugging Face Hub | https://hf.co/blog/internlmassistant/internlm-20b | Using XTuner to Fine-Tune InternLM-20B on a Single 24G GPU |
Trying IDEFICS on a *New Yorker* cartoon dataset | https://hf.co/blog/monsoon-nlp/idefics-newyorker | Future work |
Introducing BlindChat, an open-source and privacy-by-design Conversational AI fully in-browser | https://hf.co/blog/dhuynh95/introducing-blindchat | Next steps |
ESMBind (ESMB) Ensemble Models | https://hf.co/blog/AmelieSchreiber/esmbind-ensemble | Designing a Binder for your Protein with RFDiffusion |
Optimizing Convolutional Neural Networks with Mojo - Part 1 | https://hf.co/blog/rishiraj/optimizing-cnn-with-mojo-1 | Conclusion |
AI Total Cost of Ownership Calculator: Evaluate the cost of in-house AI deployment vs AI APIs | https://hf.co/blog/dhuynh95/ai-tco-calculator | Conclusion |
🤗 LLM suggestions in Argilla with HuggingFace Inference Endpoints | https://hf.co/blog/alvarobartt/argilla-suggestions-via-inference-endpoints | ➡️ Next steps |
Hugging Face and Scrimba partner to teach developers to utilize open-source AI models | https://hf.co/blog/perborgen/hugging-face-and-scrimba-partner-to-teach-develope | Building high-quality AI Engineering courses |
ESMBind (ESMB): Low Rank Adaptation of ESM-2 for Protein Binding Site Prediction | https://hf.co/blog/AmelieSchreiber/esmbind | Next Steps |
Introduction to Quantization cooked in 🤗 with 💗🧑🍳 | https://hf.co/blog/merve/quantization | Useful Resources |