
bart-large-tomasg25/scientific_lay_summarisation

This model was trained using Amazon SageMaker and the Hugging Face Deep Learning Container. For more information, see the Hugging Face documentation for Amazon SageMaker: https://huggingface.co/docs/sagemaker

Hyperparameters

{
  "cache_dir": "opt/ml/input",
  "dataset_config_name": "plos",
  "dataset_name": "tomasg25/scientific_lay_summarisation",
  "do_eval": true,
  "do_predict": true,
  "do_train": true,
  "fp16": true,
  "learning_rate": 5e-05,
  "model_name_or_path": "facebook/bart-large",
  "num_train_epochs": 1,
  "output_dir": "/opt/ml/model",
  "per_device_eval_batch_size": 4,
  "per_device_train_batch_size": 4,
  "predict_with_generate": true,
  "seed": 7
}
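
For reference, a run with these hyperparameters could be launched via the SageMaker Hugging Face estimator. The sketch below is illustrative only: the entry point script, instance type, IAM role, and container versions are assumptions, not details reported on this card.

from sagemaker.huggingface import HuggingFace

# hyperparameters from the card above, passed through to the training script
hyperparameters = {
    "dataset_name": "tomasg25/scientific_lay_summarisation",
    "dataset_config_name": "plos",
    "model_name_or_path": "facebook/bart-large",
    "num_train_epochs": 1,
    "learning_rate": 5e-05,
    "per_device_train_batch_size": 4,
    "per_device_eval_batch_size": 4,
    "fp16": True,
    "do_train": True,
    "do_eval": True,
    "do_predict": True,
    "predict_with_generate": True,
    "seed": 7,
    "output_dir": "/opt/ml/model",
}

# assumed script, container versions, and instance type (not reported on this card)
huggingface_estimator = HuggingFace(
    entry_point="run_summarization.py",
    source_dir="./examples/pytorch/summarization",
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    role="<your-sagemaker-execution-role>",
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
    hyperparameters=hyperparameters,
)

huggingface_estimator.fit()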

Usage

from transformers import pipeline

# load the fine-tuned summarisation model from the Hub
summarizer = pipeline("summarization", model="sambydlo/bart-large-scientific-lay-summarisation")

article = "Food production is a major driver of greenhouse gas (GHG) emissions, water and land use, and dietary risk factors are contributors to non-communicable diseases. Shifts in dietary patterns can therefore potentially provide benefits for both the environment and health. However, there is uncertainty about the magnitude of these impacts, and the dietary changes necessary to achieve them. We systematically review the evidence on changes in GHG emissions, land use, and water use, from shifting current dietary intakes to environmentally sustainable dietary patterns. We find 14 common sustainable dietary patterns across reviewed studies, with reductions as high as 70–80% of GHG emissions and land use, and 50% of water use (with medians of about 20–30% for these indicators across all studies) possible by adopting sustainable dietary patterns. Reductions in environmental footprints were generally proportional to the magnitude of animal-based food restriction. Dietary shifts also yielded modest benefits in all-cause mortality risk. Our review reveals that environmental and health benefits are possible by shifting current Western diets to a variety of more sustainable dietary patterns."
summarizer(article)
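
For longer articles or finer control over decoding, the model can also be loaded directly instead of through the pipeline. A minimal sketch; the beam-search and length settings below are illustrative choices, not values reported on this card:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "sambydlo/bart-large-scientific-lay-summarisation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# truncate to BART's 1024-token encoder context
inputs = tokenizer(article, max_length=1024, truncation=True, return_tensors="pt")

# illustrative generation settings (beam search with length bounds)
summary_ids = model.generate(**inputs, num_beams=4, max_length=256, min_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))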

Results

Metric          Value
eval_rouge1     41.3889
eval_rouge2     13.3641
eval_rougeL     24.3154
eval_rougeLsum  36.612
test_rouge1     41.4786
test_rouge2     13.3787
test_rougeL     24.1558
test_rougeLsum  36.7723
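
These scores can be recomputed with the Hugging Face evaluate library. A minimal sketch, with placeholder strings standing in for the model's generated summaries and the reference lay summaries:

import evaluate

rouge = evaluate.load("rouge")

# placeholders: in practice these are the generated and reference summaries
predictions = ["the model's generated lay summary"]
references = ["the gold-standard lay summary"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # rouge1, rouge2, rougeL, rougeLsum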

Dataset used to train sambydlo/bart-large-scientific-lay-summarisation

tomasg25/scientific_lay_summarisation (plos configuration)
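
A minimal sketch for loading this dataset with the datasets library, using the plos configuration from the hyperparameters above; the article and summary column names are assumptions about the dataset schema:

from datasets import load_dataset

# "plos" is the dataset configuration this model was trained on
dataset = load_dataset("tomasg25/scientific_lay_summarisation", "plos")

# column names below are assumed, not confirmed by this card
example = dataset["train"][0]
print(example["article"][:200])
print(example["summary"][:200])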

Evaluation results

  • Validation ROUGE-1 on tomasg25/scientific_lay_summarisation: 42.621 (self-reported)
  • Validation ROUGE-2 on tomasg25/scientific_lay_summarisation: 21.983 (self-reported)
  • Validation ROUGE-L on tomasg25/scientific_lay_summarisation: 33.034 (self-reported)
  • Test ROUGE-1 on tomasg25/scientific_lay_summarisation: 41.317 (self-reported)
  • Test ROUGE-2 on tomasg25/scientific_lay_summarisation: 20.872 (self-reported)
  • Test ROUGE-L on tomasg25/scientific_lay_summarisation: 32.134 (self-reported)