Tasks: Token Classification
Modalities: Text
Sub-tasks: part-of-speech
Languages: English
Size: 1K - 10K
Dataset Card for "twitter-pos"
Dataset Summary
Part-of-speech tagging is a basic NLP task. However, Twitter text is difficult to PoS-tag: it is noisy, with linguistic errors and an idiosyncratic style. This dataset contains two English PoS-tagging datasets for tweets:
- Ritter, with train/dev/test
- Foster, with dev/test
The splits are those defined in the Derczynski et al. paper; the data itself comes from Ritter and Foster:
- Ritter: https://aclanthology.org/D11-1141.pdf
- Foster: https://www.aaai.org/ocs/index.php/ws/aaaiw11/paper/download/3912/4191
Supported Tasks and Leaderboards
Languages
English, non-region-specific. bcp47:en
Dataset Structure
Data Instances
An example of 'train' looks as follows.
{'id': '0', 'tokens': ['Antick', 'Musings', 'post', ':', 'Book-A-Day', '2010', '#', '243', '(', '10/4', ')', '--', 'Gray', 'Horses', 'by', 'Hope', 'Larson', 'http://bit.ly/as8fvc'], 'pos_tags': [23, 23, 22, 9, 23, 12, 22, 12, 5, 12, 6, 9, 23, 23, 16, 23, 23, 51]}
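To make the record layout concrete, here is a minimal sketch that takes the instance above and zips each token with its integer tag index. The `pair_tokens_with_tags` helper is purely illustrative (not part of the dataset's API); the record dict is copied verbatim from the example.

```python
# A single 'train' record, copied from the example instance above.
example = {
    "id": "0",
    "tokens": ["Antick", "Musings", "post", ":", "Book-A-Day", "2010", "#",
               "243", "(", "10/4", ")", "--", "Gray", "Horses", "by", "Hope",
               "Larson", "http://bit.ly/as8fvc"],
    "pos_tags": [23, 23, 22, 9, 23, 12, 22, 12, 5, 12, 6, 9, 23, 23, 16, 23,
                 23, 51],
}

def pair_tokens_with_tags(record):
    """Illustrative helper: zip each token with its integer PoS tag index.

    tokens and pos_tags are parallel lists of equal length, so the pairing
    is a straight zip.
    """
    assert len(record["tokens"]) == len(record["pos_tags"])
    return list(zip(record["tokens"], record["pos_tags"]))

pairs = pair_tokens_with_tags(example)
print(pairs[:3])  # [('Antick', 23), ('Musings', 23), ('post', 22)]
```

Mapping the integer indices back to tag names requires the full tagset listed under Data Fields.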
Data Fields
The data fields are the same among all splits.
twitter-pos
- id: a string feature.
- tokens: a list of string features.
- pos_tags: a list of classification labels (int). Full tagset with indices:
Data Splits
name | tokens | sentences |
---|---|---|
ritter train | 10652 | 551 |
ritter dev | 2242 | 118 |
ritter test | 2291 | 118 |
foster dev | 2998 | 270 |
foster test | 2841 | 250 |
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
@inproceedings{ritter2011named,
title={Named entity recognition in tweets: an experimental study},
author={Ritter, Alan and Clark, Sam and Etzioni, Oren and others},
booktitle={Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing},
pages={1524--1534},
year={2011}
}
@inproceedings{foster2011hardtoparse,
title={\# hardtoparse: POS Tagging and Parsing the Twitterverse},
author={Foster, Jennifer and Cetinoglu, Ozlem and Wagner, Joachim and Le Roux, Joseph and Hogan, Stephen and Nivre, Joakim and Hogan, Deirdre and Van Genabith, Josef},
booktitle={Workshops at the Twenty-Fifth AAAI Conference on Artificial Intelligence},
year={2011}
}
@inproceedings{derczynski2013twitter,
title={Twitter part-of-speech tagging for all: Overcoming sparse and noisy data},
author={Derczynski, Leon and Ritter, Alan and Clark, Sam and Bontcheva, Kalina},
booktitle={Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2013)},
pages={198--206},
year={2013}
}
Contributions
Author uploaded (@leondz)