Dataset columns: question_id (int64), question_title (string), question_body (string), accepted_answer_id (int64), question_creation_date (timestamp[us]), question_answer_count (int64), question_favorite_count (float64), question_score (int64), question_view_count (int64), tags (string), answer_body (string), answer_creation_date (timestamp[us]), answer_score (int64), link (string), context (string), answer_start (int64), answer_end (int64), question (string), predicted_answer (string), parsed_answer (string).
Question 63,701,878: Convert series from pandas DataFrame to string (asked 2020-09-02, tags: python|pandas, score 1, 37 views)

For my dataframe

df = pd.DataFrame({
    "cat": ["a", "a", "a", "b", "b", "b"],
    "step": [1, 3, 2, 2, 1, 3],
    "Id": [101, 103, 102, 902, 901, 903]})

I need to get the Id values as a string on output, using the step values as the ordering clause:

cat_a: '101,102,103'
cat_b: '901,902,903'

I currently do this with a heavy construction. Is there a more elegant solution instead?

dfa = df.loc[df["cat"] == "a", ["step", "Id"]]
dfa = dfa.set_index("step")
a1 = dfa[dfa.index == 1].iloc[0][0]
a2 = dfa[dfa.index == 2].iloc[0][0]
a3 = dfa[dfa.index == 3].iloc[0][0]
cat_a = '{}, {}, {}'.format(a1, a2, a3)
…
cat_b = '{}, {}, {}'.format(b1, b2, b3)

Accepted answer 63,701,919 (answered 2020-09-02, score 0):

Use DataFrame.sort_values (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html) on both columns first to get the expected order, then group by cat and aggregate Id with join inside a lambda that converts the values to strings:

d = (df.sort_values(['cat','step'])
.groupby('cat')['Id']
.agg(lambda x: ','.join(x.astype(str)))
.to_dict())
print (d)
{'a': '101,102,103', 'b': '901,902,903'}
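
A possible follow-up, not part of the original answer: the dictionary built above can simply be indexed to produce the exact variables the question asked for.

cat_a = d['a']  # '101,102,103'
cat_b = d['b']  # '901,902,903'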
Context (linked documentation): https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_string.html

pandas.DataFrame.to_string#
DataFrame.to_string(buf=None, columns=None, col_space=None, header=True, index=True, na_rep='NaN', formatters=None, float_format=None, sparsify=None, index_names=True, justify=None, max_rows=None, max_cols=None, show_dimensions=False, decimal='.', line_width=None, min_rows=None, max_colwidth=None, encoding=None)[source]#
Render a DataFrame to a console-friendly tabular output.
Parameters
buf : str, Path or StringIO-like, optional, default None
    Buffer to write to. If None, the output is returned as a string.
columns : sequence, optional, default None
    The subset of columns to write. Writes all columns by default.
col_space : int, list or dict of int, optional
    The minimum width of each column. If a list of ints is given, each integer corresponds to one column. If a dict is given, the key references the column, while the value defines the space to use.
header : bool or sequence of str, optional
    Write out the column names. If a list of strings is given, it is assumed to be aliases for the column names.
index : bool, optional, default True
    Whether to print index (row) labels.
na_rep : str, optional, default 'NaN'
    String representation of NaN to use.
formatters : list, tuple or dict of one-param. functions, optional
    Formatter functions to apply to columns' elements by position or name. The result of each function must be a unicode string. List/tuple must be of length equal to the number of columns.
float_format : one-parameter function, optional, default None
    Formatter function to apply to columns' elements if they are floats. This function must return a unicode string and will be applied only to the non-NaN elements, with NaN being handled by na_rep.
    Changed in version 1.2.0.
sparsify : bool, optional, default True
    Set to False for a DataFrame with a hierarchical index to print every multiindex key at each row.
index_names : bool, optional, default True
    Prints the names of the indexes.
justify : str, default None
    How to justify the column labels. If None uses the option from the print configuration (controlled by set_option), 'right' out of the box. Valid values are left, right, center, justify, justify-all, start, end, inherit, match-parent, initial, unset.
max_rows : int, optional
    Maximum number of rows to display in the console.
max_cols : int, optional
    Maximum number of columns to display in the console.
show_dimensions : bool, default False
    Display DataFrame dimensions (number of rows by number of columns).
decimal : str, default '.'
    Character recognized as decimal separator, e.g. ',' in Europe.
line_width : int, optional
    Width to wrap a line in characters.
min_rows : int, optional
    The number of rows to display in the console in a truncated repr (when number of rows is above max_rows).
max_colwidth : int, optional
    Max width to truncate each column in characters. By default, no limit.
    New in version 1.0.0.
encoding : str, default "utf-8"
    Set character encoding.
    New in version 1.0.
Returns
str or None
    If buf is None, returns the result as a string. Otherwise returns None.
See also
to_html
    Convert DataFrame to HTML.
Examples
>>> d = {'col1': [1, 2, 3], 'col2': [4, 5, 6]}
>>> df = pd.DataFrame(d)
>>> print(df.to_string())
col1 col2
0 1 4
1 2 5
2 3 6
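
A brief illustration of the na_rep and float_format parameters described above (the frame and the chosen formats are illustrative, not from the documentation):

import numpy as np
import pandas as pd

df = pd.DataFrame({"col1": [1.5, np.nan, 3.25], "col2": [4, 5, 6]})

# Render missing values as "-" and floats with two decimal places
print(df.to_string(na_rep="-", float_format="{:.2f}".format))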
Question 67,914,151: Filtering only 1 column in a df without returning the entire DF in 1 line (asked 2021-06-10, tags: python|pandas, score 1, 41 views)

I'm hoping that there is a way I can return a series from a df while I'm filtering it in one line. Is there a way I could return a column from my df after I filter it? Currently my process is something like this:

df = df[df['a'] > 0]
list = df['a']

Accepted answer 67,915,627 (answered 2021-06-10, score 0):

The df.loc syntax is the preferred way to do this, as @JohnM wrote in his comment, though I find the syntax from @Don'tAccept more readable and scalable, since it can handle cases like column names with spaces in them. These combine like:

df.loc[df['a'] > 0, 'a']

Note this is expandable to provide multiple columns, for example if you wanted columns 'a' and 'b' you would do:

df.loc[df['a'] > 0, ['a', 'b']]

Lastly, you can verify that df.a and df['a'] are the same by checking

in: df.a is df['a']
out: True

The is here (as opposed to ==) means df.a and df['a'] point to the same object in memory, so they are interchangeable.

Context (linked documentation): https://pandas.pydata.org/docs/user_guide/groupby.html
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
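
As a rough sketch of that correspondence, the SQL statement above could be expressed in pandas along these lines (SomeTable and the column names are placeholders taken from the SQL snippet, not a real table):

import pandas as pd

# A hypothetical stand-in for SomeTable
some_table = pd.DataFrame(
    {
        "Column1": ["x", "x", "y"],
        "Column2": ["a", "a", "b"],
        "Column3": [1.0, 3.0, 5.0],
        "Column4": [10, 20, 30],
    }
)

# SELECT Column1, Column2, mean(Column3), sum(Column4) ... GROUP BY Column1, Column2
result = some_table.groupby(["Column1", "Column2"]).agg({"Column3": "mean", "Column4": "sum"})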
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
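
A short sketch of a few of these key types (the tiny frame and labels below are illustrative, not from the guide):

import pandas as pd

df = pd.DataFrame({"value": [1, 2, 3, 4]}, index=["apple", "avocado", "banana", "berry"])

# A Python function, called on each index label
df.groupby(lambda label: label[0]).sum()

# A list of the same length as the selected axis
df.groupby(["x", "x", "y", "y"]).sum()

# A dict providing a label -> group name mapping
df.groupby({"apple": "a", "avocado": "a", "banana": "b", "berry": "b"}).sum()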
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of dropna argument is True which means NA are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function
Description
mean()
Compute mean of groups
sum()
Compute sum of group values
size()
Compute group sizes
count()
Compute count of group
std()
Standard deviation of groups
var()
Compute variance of groups
sem()
Standard error of the mean of groups
describe()
Generates descriptive statistics
first()
Compute first of group values
last()
Compute last of group values
nth()
Take nth value, or a subset if n is a list
min()
Compute min of group values
max()
Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work,
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments:
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
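
A minimal sketch of that pattern, reusing the animals frame from above (the 75th-percentile choice is only illustrative):

import functools
import pandas as pd

animals = pd.DataFrame(
    {
        "kind": ["cat", "dog", "cat", "dog"],
        "height": [9.1, 6.0, 9.5, 34.0],
        "weight": [7.9, 7.5, 9.9, 198.0],
    }
)

# Bind the extra argument up front instead of trying to pass it through **kwargs
height_q75 = functools.partial(pd.Series.quantile, q=0.75)

animals.groupby("kind").agg(tall_height=("height", height_q75))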
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
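
A tiny sketch of the dtype point (the frame is illustrative): integer input transformed with a float-returning function yields a float column.

import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b"], "value": [1, 2, 3]})

# The input column is int64, but the UDF returns floats, so the result is float64
df.groupby("key")["value"].transform(lambda x: x / x.sum())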
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, but returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it. It can depend on the passed function and
exactly what you are grouping. Thus the grouped column(s) may be included in
the output as well as set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
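The excerpt above defines f but does not show it being invoked; a self-contained sketch of the call and its effect (the small frame here is made up, not the frame used above):

import pandas as pd

df_small = pd.DataFrame({"A": ["foo", "bar", "foo", "bar"], "C": [1.0, 2.0, 3.0, 4.0]})
grouped_c = df_small.groupby("A")["C"]

def f(group):
    # One DataFrame per group: the original values and the demeaned values
    return pd.DataFrame({"original": group, "demeaned": group - group.mean()})

# The per-group DataFrames are glued back into a single DataFrame,
# so the Series groupby "upcasts" to a two-column result.
grouped_c.apply(f)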
apply on a Series can operate on a returned value from the applied function,
that is itself a series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
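A minimal sketch of the values, index signature described above, assuming the optional Numba dependency is installed (the frame and the mean reduction are only illustrative):

import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b", "b"], "value": [1.0, 2.0, 3.0, 4.0]})

# The UDF receives the group's data and its index as NumPy arrays
def group_mean(values, index):
    return values.mean()

# JIT-compiled on first use; there is no fall back to the Python engine
df.groupby("key")["value"].agg(group_mean, engine="numba")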
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only interesting over one column (here colname), it may be filtered
before applying the aggregation function.
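A small sketch of that preference (the frame is illustrative; both lines produce the same numbers):

import numpy as np
import pandas as pd

df = pd.DataFrame({"A": ["foo", "bar", "foo", "bar"],
                   "C": np.random.randn(4),
                   "D": np.random.randn(4)})

# Select the column of interest first, then aggregate only it...
df.groupby("A")["C"].std()

# ...rather than aggregating every column and discarding the rest afterwards
df.groupby("A").std(numeric_only=True)["C"]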
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a “nuisance” column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The dtype of the returned group index will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
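A small sketch of the default behaviour (the frame is illustrative; see the “GroupBy dropna” section above for opting back in with dropna=False):

import numpy as np
import pandas as pd

df = pd.DataFrame({"key": ["a", np.nan, "a", "b"], "value": [1, 2, 3, 4]})

# The row whose key is NaN is silently dropped: only groups "a" and "b" appear
df.groupby("key").sum()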
Grouping with ordered factors#
Categorical variables represented as instance of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False, will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group, in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
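A small illustrative sketch, not from the original text: extra positional and keyword arguments given to .pipe are forwarded to the function after the GroupBy object. The helper name top_revenue below is made up for illustration.

import pandas as pd

df_pipe = pd.DataFrame({"Store": ["s1", "s1", "s2"], "Revenue": [10.0, 20.0, 5.0]})

def top_revenue(grouped, col, n=1):
    # return the n largest values of `col` within each group
    return grouped[col].nlargest(n)

df_pipe.groupby("Store").pipe(top_revenue, "Revenue", n=1)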
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resampling to work on indices that are non-datetimelike, the following procedure can be utilized.
In the following examples, df.index // 5 returns a binary array which is used to determine what gets selected for the groupby operation.
Note
The below example shows how we can downsample by consolidation of samples into fewer samples. Here by using df.index // 5, we are aggregating the samples in bins. By applying std() function, we aggregate the information contained in many samples into a small subset of values which is their standard deviation thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 32 | 671 | Filtering only 1 column in a df without returning the entire DF in 1 line
I'm hoping that there is a way I can return a series from df while I'm filtering it in 1 line.
Is there a way I could return a column from my df after I filter it?
Currently my process is something like this
df = df[df['a'] > 0 ]
list = df['a']
| The df.loc syntax is the preferred way to do this, as @JohnM wrote in his comment, though I find the syntax from @Don'tAccept more readable and scalable, since it can handle cases like column names with spaces in them. These combine like:
df.loc[df['a'] > 0, 'a']
Note this is expandable to provide multiple columns, for example if you wanted columns 'a' and 'b' you would do:
df.loc[df['a'] > 0, ['a', 'b']]
Lastly, you can verify that df.a and df['a'] are the same by checking
in: df.a is df['a']
out: True
The is here (as opposed to ==) means df.a and df['a'] point to the same object in memory, so they are interchangeable. |
|
64,239,252 | Time series data merge nearest right dataset has multiple same values | <p>I have two dataframes. The first is like a log while the second is like inputs. I want to combine this log and inputs based on their time columns.</p>
<p>I tried using <code>merge_asof</code> but it only takes one input into the input dataframe.</p>
<p>Here is an example. Dataframe Log Times, <code>log</code>:</p>
<pre><code> STARTTIME_Log
2020-05-28 21:57:27.000000
2020-05-28 06:35:20.000000
2020-05-28 19:51:39.000000
2020-05-28 20:43:23.000000
</code></pre>
<p>DataFrame Input Times and Values, <code>input</code>:</p>
<pre><code> IO_Time IOName value
2020-05-28 21:57:35.037 A 79.65
2020-05-28 21:57:35.037 B 33.33
2020-05-28 06:35:22.037 A 27.53
2020-05-28 06:35:22.037 B 6.23
2020-05-28 09:30:20.037 A 43.50
2020-05-28 09:30:20.037 B 15.23
2020-05-28 19:51:40.037 A 100.00
2020-05-28 19:51:40.037 B 12.52
2020-05-28 20:43:25.037 A 56.43
2020-05-28 20:43:25.037 B 2.67
2020-05-28 22:32:56.037 A 23.45
2020-05-28 22:32:56.037 B 3.55
</code></pre>
<p>Expected Output:</p>
<pre><code> STARTTIME_Log IOName value
2020-05-28 21:57:27.000000 A 79.65
2020-05-28 21:57:27.000000 B 33.33
2020-05-28 06:35:20.000000 A 27.53
2020-05-28 06:35:20.000000 B 6.23
2020-05-28 19:51:39.000000 A 100.00
2020-05-28 19:51:39.000000 B 12.52
2020-05-28 20:43:23.000000 A 56.43
2020-05-28 20:43:23.000000 B 2.67
</code></pre>
<p>The output merges the <code>log</code> and <code>input</code> dataframes at the nearest time.
The merge is done on <code>STARTTIME_Log</code> for the <code>log</code> dataframe and <code>IO_Time</code> on <code>input</code>.
If there is too large a difference then the rows are dropped.</p>
<p>How can I do that?</p> | 64,239,730 | 2020-10-07T07:28:00.140000 | 1 | null | 0 | 42 | python|pandas | <p>First, make sure that the <code>IO_Time</code> and <code>STARTTIME_Log</code> columns are of datetime type and are sorted (required to use <code>merge_asof</code>:</p>
<pre><code>log['STARTTIME_Log'] = pd.to_datetime(log['STARTTIME_Log'])
input['IO_Time'] = pd.to_datetime(input['IO_Time'])
log = log.sort_values('STARTTIME_Log')
input = input.sort_values('IO_Time')
</code></pre>
<p>Now, use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer"><code>merge_asof</code></a> with <code>input</code> as the left dataframe and <code>log</code> as the right. Note that you need to specify an acceptable tolerance value (I set it to 10 seconds here):</p>
<pre><code>tol = pd.Timedelta('10s')
df = pd.merge_asof(input, log, left_on='IO_Time', right_on='STARTTIME_Log', tolerance=tol, direction='nearest')
df = df.dropna(subset=['STARTTIME_Log']).drop(columns='IO_Time')
</code></pre>
<p>Afterwards, the rows that don't have a match in <code>log</code> are dropped and the <code>IO_Time</code> column is removed.</p>
<p>Result:</p>
<pre><code> IOName value STARTTIME_Log
0 A 27.53 2020-05-28 06:35:20
1 B 6.23 2020-05-28 06:35:20
4 A 100.00 2020-05-28 19:51:39
5 B 12.52 2020-05-28 19:51:39
6 A 56.43 2020-05-28 20:43:23
7 B 2.67 2020-05-28 20:43:23
8 A 79.65 2020-05-28 21:57:27
9 B 33.33 2020-05-28 21:57:27
</code></pre> | 2020-10-07T08:00:18.793000 | 0 | https://pandas.pydata.org/docs/dev/user_guide/merging.html | Merge, join, concatenate and compare#
First, make sure that the IO_Time and STARTTIME_Log columns are of datetime type and are sorted (required to use merge_asof:
log['STARTTIME_Log'] = pd.to_datetime(log['STARTTIME_Log'])
input['IO_Time'] = pd.to_datetime(input['IO_Time'])
log = log.sort_values('STARTTIME_Log')
input = input.sort_values('IO_Time')
Now, use merge_asof with input as the left dataframe and log as the right. Note that you need to specify an acceptable tolerance value (I set it to 10 seconds here):
tol = pd.Timedelta('10s')
df = pd.merge_asof(input, log, left_on='IO_Time', right_on='STARTTIME_Log', tolerance=tol, direction='nearest')
df = df.dropna(subset=['STARTTIME_Log']).drop(columns='IO_Time')
Afterwards, the rows that don't have a match in log are dropped and the IO_Time column is removed.
Result:
IOName value STARTTIME_Log
0 A 27.53 2020-05-28 06:35:20
1 B 6.23 2020-05-28 06:35:20
4 A 100.00 2020-05-28 19:51:39
5 B 12.52 2020-05-28 19:51:39
6 A 56.43 2020-05-28 20:43:23
7 B 2.67 2020-05-28 20:43:23
8 A 79.65 2020-05-28 21:57:27
9 B 33.33 2020-05-28 21:57:27
Merge, join, concatenate and compare#
pandas provides various facilities for easily combining together Series or
DataFrame with various kinds of set logic for the indexes
and relational algebra functionality in the case of join / merge-type
operations.
In addition, pandas also provides utilities to compare two Series or DataFrame
and summarize their differences.
Concatenating objects#
The concat() function (in the main pandas namespace) does all of
the heavy lifting of performing concatenation operations along an axis while
performing optional set logic (union or intersection) of the indexes (if any) on
the other axes. Note that I say “if any” because there is only a single possible
axis of concatenation for Series.
Before diving into all of the details of concat and what it can do, here is
a simple example:
In [1]: df1 = pd.DataFrame(
...: {
...: "A": ["A0", "A1", "A2", "A3"],
...: "B": ["B0", "B1", "B2", "B3"],
...: "C": ["C0", "C1", "C2", "C3"],
...: "D": ["D0", "D1", "D2", "D3"],
...: },
...: index=[0, 1, 2, 3],
...: )
...:
In [2]: df2 = pd.DataFrame(
...: {
...: "A": ["A4", "A5", "A6", "A7"],
...: "B": ["B4", "B5", "B6", "B7"],
...: "C": ["C4", "C5", "C6", "C7"],
...: "D": ["D4", "D5", "D6", "D7"],
...: },
...: index=[4, 5, 6, 7],
...: )
...:
In [3]: df3 = pd.DataFrame(
...: {
...: "A": ["A8", "A9", "A10", "A11"],
...: "B": ["B8", "B9", "B10", "B11"],
...: "C": ["C8", "C9", "C10", "C11"],
...: "D": ["D8", "D9", "D10", "D11"],
...: },
...: index=[8, 9, 10, 11],
...: )
...:
In [4]: frames = [df1, df2, df3]
In [5]: result = pd.concat(frames)
Like its sibling function on ndarrays, numpy.concatenate, pandas.concat
takes a list or dict of homogeneously-typed objects and concatenates them with
some configurable handling of “what to do with the other axes”:
pd.concat(
objs,
axis=0,
join="outer",
ignore_index=False,
keys=None,
levels=None,
names=None,
verify_integrity=False,
copy=True,
)
objs : a sequence or mapping of Series or DataFrame objects. If a
dict is passed, the sorted keys will be used as the keys argument, unless
it is passed, in which case the values will be selected (see below). Any None
objects will be dropped silently unless they are all None in which case a
ValueError will be raised.
axis : {0, 1, …}, default 0. The axis to concatenate along.
join : {‘inner’, ‘outer’}, default ‘outer’. How to handle indexes on
other axis(es). Outer for union and inner for intersection.
ignore_index : boolean, default False. If True, do not use the index
values on the concatenation axis. The resulting axis will be labeled 0, …,
n - 1. This is useful if you are concatenating objects where the
concatenation axis does not have meaningful indexing information. Note
the index values on the other axes are still respected in the join.
keys : sequence, default None. Construct hierarchical index using the
passed keys as the outermost level. If multiple levels passed, should
contain tuples.
levels : list of sequences, default None. Specific levels (unique values)
to use for constructing a MultiIndex. Otherwise they will be inferred from the
keys.
names : list, default None. Names for the levels in the resulting
hierarchical index.
verify_integrity : boolean, default False. Check whether the new
concatenated axis contains duplicates. This can be very expensive relative
to the actual data concatenation.
copy : boolean, default True. If False, do not copy data unnecessarily.
Without a little bit of context many of these arguments don’t make much sense.
Let’s revisit the above example. Suppose we wanted to associate specific keys
with each of the pieces of the chopped up DataFrame. We can do this using the
keys argument:
In [6]: result = pd.concat(frames, keys=["x", "y", "z"])
As you can see (if you’ve read the rest of the documentation), the resulting
object’s index has a hierarchical index. This
means that we can now select out each chunk by key:
In [7]: result.loc["y"]
Out[7]:
A B C D
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7
It’s not a stretch to see how this can be very useful. More detail on this
functionality below.
Note
It is worth noting that concat() makes a full copy of the data, and that constantly
reusing this function can create a significant performance hit. If you need
to use the operation over several datasets, use a list comprehension.
frames = [ process_your_file(f) for f in files ]
result = pd.concat(frames)
Note
When concatenating DataFrames with named axes, pandas will attempt to preserve
these index/column names whenever possible. In the case where all inputs share a
common name, this name will be assigned to the result. When the input names do
not all agree, the result will be unnamed. The same is true for MultiIndex,
but the logic is applied separately on a level-by-level basis.
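A short sketch (not from the original text) of the name-preservation rule described in the note above; the index name "idx" is made up for illustration:

import pandas as pd

a = pd.Series([1, 2], index=pd.Index(["x", "y"], name="idx"), name="a")
b = pd.Series([3, 4], index=pd.Index(["x", "y"], name="idx"), name="b")

pd.concat([a, b], axis=1).index.name                        # 'idx' -- the shared name is kept
pd.concat([a, b.rename_axis("other")], axis=1).index.name   # None  -- conflicting names are dropped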
Set logic on the other axes#
When gluing together multiple DataFrames, you have a choice of how to handle
the other axes (other than the one being concatenated). This can be done in
the following two ways:
Take the union of them all, join='outer'. This is the default
option as it results in zero information loss.
Take the intersection, join='inner'.
Here is an example of each of these methods. First, the default join='outer'
behavior:
In [8]: df4 = pd.DataFrame(
...: {
...: "B": ["B2", "B3", "B6", "B7"],
...: "D": ["D2", "D3", "D6", "D7"],
...: "F": ["F2", "F3", "F6", "F7"],
...: },
...: index=[2, 3, 6, 7],
...: )
...:
In [9]: result = pd.concat([df1, df4], axis=1)
Here is the same thing with join='inner':
In [10]: result = pd.concat([df1, df4], axis=1, join="inner")
Lastly, suppose we just wanted to reuse the exact index from the original
DataFrame:
In [11]: result = pd.concat([df1, df4], axis=1).reindex(df1.index)
Similarly, we could index before the concatenation:
In [12]: pd.concat([df1, df4.reindex(df1.index)], axis=1)
Out[12]:
A B C D B D F
0 A0 B0 C0 D0 NaN NaN NaN
1 A1 B1 C1 D1 NaN NaN NaN
2 A2 B2 C2 D2 B2 D2 F2
3 A3 B3 C3 D3 B3 D3 F3
Ignoring indexes on the concatenation axis#
For DataFrame objects which don’t have a meaningful index, you may wish
to append them and ignore the fact that they may have overlapping indexes. To
do this, use the ignore_index argument:
In [13]: result = pd.concat([df1, df4], ignore_index=True, sort=False)
Concatenating with mixed ndims#
You can concatenate a mix of Series and DataFrame objects. The
Series will be transformed to DataFrame with the column name as
the name of the Series.
In [14]: s1 = pd.Series(["X0", "X1", "X2", "X3"], name="X")
In [15]: result = pd.concat([df1, s1], axis=1)
Note
Since we’re concatenating a Series to a DataFrame, we could have
achieved the same result with DataFrame.assign(). To concatenate an
arbitrary number of pandas objects (DataFrame or Series), use
concat.
If unnamed Series are passed they will be numbered consecutively.
In [16]: s2 = pd.Series(["_0", "_1", "_2", "_3"])
In [17]: result = pd.concat([df1, s2, s2, s2], axis=1)
Passing ignore_index=True will drop all name references.
In [18]: result = pd.concat([df1, s1], axis=1, ignore_index=True)
More concatenating with group keys#
A fairly common use of the keys argument is to override the column names
when creating a new DataFrame based on existing Series.
Notice how the default behaviour consists on letting the resulting DataFrame
inherit the parent Series’ name, when these existed.
In [19]: s3 = pd.Series([0, 1, 2, 3], name="foo")
In [20]: s4 = pd.Series([0, 1, 2, 3])
In [21]: s5 = pd.Series([0, 1, 4, 5])
In [22]: pd.concat([s3, s4, s5], axis=1)
Out[22]:
foo 0 1
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Through the keys argument we can override the existing column names.
In [23]: pd.concat([s3, s4, s5], axis=1, keys=["red", "blue", "yellow"])
Out[23]:
red blue yellow
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Let’s consider a variation of the very first example presented:
In [24]: result = pd.concat(frames, keys=["x", "y", "z"])
You can also pass a dict to concat in which case the dict keys will be used
for the keys argument (unless other keys are specified):
In [25]: pieces = {"x": df1, "y": df2, "z": df3}
In [26]: result = pd.concat(pieces)
In [27]: result = pd.concat(pieces, keys=["z", "y"])
The MultiIndex created has levels that are constructed from the passed keys and
the index of the DataFrame pieces:
In [28]: result.index.levels
Out[28]: FrozenList([['z', 'y'], [4, 5, 6, 7, 8, 9, 10, 11]])
If you wish to specify other levels (as will occasionally be the case), you can
do so using the levels argument:
In [29]: result = pd.concat(
....: pieces, keys=["x", "y", "z"], levels=[["z", "y", "x", "w"]], names=["group_key"]
....: )
....:
In [30]: result.index.levels
Out[30]: FrozenList([['z', 'y', 'x', 'w'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]])
This is fairly esoteric, but it is actually necessary for implementing things
like GroupBy where the order of a categorical variable is meaningful.
Appending rows to a DataFrame#
If you have a series that you want to append as a single row to a DataFrame, you can convert the row into a
DataFrame and use concat
In [31]: s2 = pd.Series(["X0", "X1", "X2", "X3"], index=["A", "B", "C", "D"])
In [32]: result = pd.concat([df1, s2.to_frame().T], ignore_index=True)
You should use ignore_index with this method to instruct DataFrame to
discard its index. If you wish to preserve the index, you should construct an
appropriately-indexed DataFrame and append or concatenate those objects.
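A compact sketch, not part of the original text, of the index-preserving alternative mentioned in the previous sentence:

import pandas as pd

df_rows = pd.DataFrame({"A": [1, 2]}, index=["r0", "r1"])
new_row = pd.DataFrame({"A": [3]}, index=["r2"])   # an appropriately-indexed one-row frame

pd.concat([df_rows, new_row])                      # the labels r0, r1, r2 are preserved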
Database-style DataFrame or named Series joining/merging#
pandas has full-featured, high performance in-memory join operations
idiomatically very similar to relational databases like SQL. These methods
perform significantly better (in some cases well over an order of magnitude
better) than other open source implementations (like base::merge.data.frame
in R). The reason for this is careful algorithmic design and the internal layout
of the data in DataFrame.
See the cookbook for some advanced strategies.
Users who are familiar with SQL but new to pandas might be interested in a
comparison with SQL.
pandas provides a single function, merge(), as the entry point for
all standard database join operations between DataFrame or named Series objects:
pd.merge(
left,
right,
how="inner",
on=None,
left_on=None,
right_on=None,
left_index=False,
right_index=False,
sort=True,
suffixes=("_x", "_y"),
copy=True,
indicator=False,
validate=None,
)
left: A DataFrame or named Series object.
right: Another DataFrame or named Series object.
on: Column or index level names to join on. Must be found in both the left
and right DataFrame and/or Series objects. If not passed and left_index and
right_index are False, the intersection of the columns in the
DataFrames and/or Series will be inferred to be the join keys.
left_on: Columns or index levels from the left DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
right_on: Columns or index levels from the right DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
left_index: If True, use the index (row labels) from the left
DataFrame or Series as its join key(s). In the case of a DataFrame or Series with a MultiIndex
(hierarchical), the number of levels must match the number of join keys
from the right DataFrame or Series.
right_index: Same usage as left_index for the right DataFrame or Series
how: One of 'left', 'right', 'outer', 'inner', 'cross'. Defaults
to inner. See below for more detailed description of each method.
sort: Sort the result DataFrame by the join keys in lexicographical
order. Defaults to True, setting to False will improve performance
substantially in many cases.
suffixes: A tuple of string suffixes to apply to overlapping
columns. Defaults to ('_x', '_y').
copy: Always copy data (default True) from the passed DataFrame or named Series
objects, even when reindexing is not necessary. Cannot be avoided in many
cases but may improve performance / memory usage. The cases where copying
can be avoided are somewhat pathological but this option is provided
nonetheless.
indicator: Add a column to the output DataFrame called _merge
with information on the source of each row. _merge is Categorical-type
and takes on a value of left_only for observations whose merge key
only appears in 'left' DataFrame or Series, right_only for observations whose
merge key only appears in 'right' DataFrame or Series, and both if the
observation’s merge key is found in both.
validate : string, default None.
If specified, checks if merge is of specified type.
“one_to_one” or “1:1”: checks if merge keys are unique in both
left and right datasets.
“one_to_many” or “1:m”: checks if merge keys are unique in left
dataset.
“many_to_one” or “m:1”: checks if merge keys are unique in right
dataset.
“many_to_many” or “m:m”: allowed, but does not result in checks.
Note
Support for specifying index levels as the on, left_on, and
right_on parameters was added in version 0.23.0.
Support for merging named Series objects was added in version 0.24.0.
The return type will be the same as left. If left is a DataFrame or named Series
and right is a subclass of DataFrame, the return type will still be DataFrame.
merge is a function in the pandas namespace, and it is also available as a
DataFrame instance method merge(), with the calling
DataFrame being implicitly considered the left object in the join.
The related join() method, uses merge internally for the
index-on-index (by default) and column(s)-on-index join. If you are joining on
index only, you may wish to use DataFrame.join to save yourself some typing.
Brief primer on merge methods (relational algebra)#
Experienced users of relational databases like SQL will be familiar with the
terminology used to describe join operations between two SQL-table like
structures (DataFrame objects). There are several cases to consider which
are very important to understand:
one-to-one joins: for example when joining two DataFrame objects on
their indexes (which must contain unique values).
many-to-one joins: for example when joining an index (unique) to one or
more columns in a different DataFrame.
many-to-many joins: joining columns on columns.
Note
When joining columns on columns (potentially a many-to-many join), any
indexes on the passed DataFrame objects will be discarded.
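A quick sketch (not from the original text) of the note above -- after a column-on-column merge the original row labels are gone:

import pandas as pd

left = pd.DataFrame({"k": ["a", "b"], "x": [1, 2]}, index=["i1", "i2"])
right = pd.DataFrame({"k": ["a", "b"], "y": [3, 4]}, index=["j1", "j2"])

pd.merge(left, right, on="k").index   # RangeIndex(start=0, stop=2, step=1)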
It is worth spending some time understanding the result of the many-to-many
join case. In SQL / standard relational algebra, if a key combination appears
more than once in both tables, the resulting table will have the Cartesian
product of the associated data. Here is a very basic example with one unique
key combination:
In [33]: left = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [34]: right = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [35]: result = pd.merge(left, right, on="key")
Here is a more complicated example with multiple join keys. Only the keys
appearing in left and right are present (the intersection), since
how='inner' by default.
In [36]: left = pd.DataFrame(
....: {
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [37]: right = pd.DataFrame(
....: {
....: "key1": ["K0", "K1", "K1", "K2"],
....: "key2": ["K0", "K0", "K0", "K0"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [38]: result = pd.merge(left, right, on=["key1", "key2"])
The how argument to merge specifies how to determine which keys are to
be included in the resulting table. If a key combination does not appear in
either the left or right tables, the values in the joined table will be
NA. Here is a summary of the how options and their SQL equivalent names:
Merge method
SQL Join Name
Description
left
LEFT OUTER JOIN
Use keys from left frame only
right
RIGHT OUTER JOIN
Use keys from right frame only
outer
FULL OUTER JOIN
Use union of keys from both frames
inner
INNER JOIN
Use intersection of keys from both frames
cross
CROSS JOIN
Create the cartesian product of rows of both frames
In [39]: result = pd.merge(left, right, how="left", on=["key1", "key2"])
In [40]: result = pd.merge(left, right, how="right", on=["key1", "key2"])
In [41]: result = pd.merge(left, right, how="outer", on=["key1", "key2"])
In [42]: result = pd.merge(left, right, how="inner", on=["key1", "key2"])
In [43]: result = pd.merge(left, right, how="cross")
You can merge a mult-indexed Series and a DataFrame, if the names of
the MultiIndex correspond to the columns from the DataFrame. Transform
the Series to a DataFrame using Series.reset_index() before merging,
as shown in the following example.
In [44]: df = pd.DataFrame({"Let": ["A", "B", "C"], "Num": [1, 2, 3]})
In [45]: df
Out[45]:
Let Num
0 A 1
1 B 2
2 C 3
In [46]: ser = pd.Series(
....: ["a", "b", "c", "d", "e", "f"],
....: index=pd.MultiIndex.from_arrays(
....: [["A", "B", "C"] * 2, [1, 2, 3, 4, 5, 6]], names=["Let", "Num"]
....: ),
....: )
....:
In [47]: ser
Out[47]:
Let Num
A 1 a
B 2 b
C 3 c
A 4 d
B 5 e
C 6 f
dtype: object
In [48]: pd.merge(df, ser.reset_index(), on=["Let", "Num"])
Out[48]:
Let Num 0
0 A 1 a
1 B 2 b
2 C 3 c
Here is another example with duplicate join keys in DataFrames:
In [49]: left = pd.DataFrame({"A": [1, 2], "B": [2, 2]})
In [50]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [51]: result = pd.merge(left, right, on="B", how="outer")
Warning
Joining / merging on duplicate keys can cause a returned frame that is the multiplication of the row dimensions, which may result in memory overflow. It is the user's responsibility to manage duplicate values in keys before joining large DataFrames.
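A tiny sketch (not in the original text) of the row multiplication the warning refers to:

import pandas as pd

dup_left = pd.DataFrame({"B": ["b", "b"], "x": [1, 2]})
dup_right = pd.DataFrame({"B": ["b", "b", "b"], "y": [3, 4, 5]})

len(pd.merge(dup_left, dup_right, on="B"))   # 6 rows: 2 x 3 copies of the duplicated key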
Checking for duplicate keys#
Users can use the validate argument to automatically check whether there
are unexpected duplicates in their merge keys. Key uniqueness is checked before
merge operations and so should protect against memory overflows. Checking key
uniqueness is also a good way to ensure user data structures are as expected.
In the following example, there are duplicate values of B in the right
DataFrame. As this is not a one-to-one merge – as specified in the
validate argument – an exception will be raised.
In [52]: left = pd.DataFrame({"A": [1, 2], "B": [1, 2]})
In [53]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [53]: result = pd.merge(left, right, on="B", how="outer", validate="one_to_one")
...
MergeError: Merge keys are not unique in right dataset; not a one-to-one merge
If the user is aware of the duplicates in the right DataFrame but wants to
ensure there are no duplicates in the left DataFrame, one can use the
validate='one_to_many' argument instead, which will not raise an exception.
In [54]: pd.merge(left, right, on="B", how="outer", validate="one_to_many")
Out[54]:
A_x B A_y
0 1 1 NaN
1 2 2 4.0
2 2 2 5.0
3 2 2 6.0
The merge indicator#
merge() accepts the argument indicator. If True, a
Categorical-type column called _merge will be added to the output object
that takes on values:
Observation Origin
_merge value
Merge key only in 'left' frame
left_only
Merge key only in 'right' frame
right_only
Merge key in both frames
both
In [55]: df1 = pd.DataFrame({"col1": [0, 1], "col_left": ["a", "b"]})
In [56]: df2 = pd.DataFrame({"col1": [1, 2, 2], "col_right": [2, 2, 2]})
In [57]: pd.merge(df1, df2, on="col1", how="outer", indicator=True)
Out[57]:
col1 col_left col_right _merge
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
The indicator argument will also accept string arguments, in which case the indicator function will use the value of the passed string as the name for the indicator column.
In [58]: pd.merge(df1, df2, on="col1", how="outer", indicator="indicator_column")
Out[58]:
col1 col_left col_right indicator_column
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
Merge dtypes#
Merging will preserve the dtype of the join keys.
In [59]: left = pd.DataFrame({"key": [1], "v1": [10]})
In [60]: left
Out[60]:
key v1
0 1 10
In [61]: right = pd.DataFrame({"key": [1, 2], "v1": [20, 30]})
In [62]: right
Out[62]:
key v1
0 1 20
1 2 30
We are able to preserve the join keys:
In [63]: pd.merge(left, right, how="outer")
Out[63]:
key v1
0 1 10
1 1 20
2 2 30
In [64]: pd.merge(left, right, how="outer").dtypes
Out[64]:
key int64
v1 int64
dtype: object
Of course if you have missing values that are introduced, then the
resulting dtype will be upcast.
In [65]: pd.merge(left, right, how="outer", on="key")
Out[65]:
key v1_x v1_y
0 1 10.0 20
1 2 NaN 30
In [66]: pd.merge(left, right, how="outer", on="key").dtypes
Out[66]:
key int64
v1_x float64
v1_y int64
dtype: object
Merging will preserve category dtypes of the mergands. See also the section on categoricals.
The left frame.
In [67]: from pandas.api.types import CategoricalDtype
In [68]: X = pd.Series(np.random.choice(["foo", "bar"], size=(10,)))
In [69]: X = X.astype(CategoricalDtype(categories=["foo", "bar"]))
In [70]: left = pd.DataFrame(
....: {"X": X, "Y": np.random.choice(["one", "two", "three"], size=(10,))}
....: )
....:
In [71]: left
Out[71]:
X Y
0 bar one
1 foo one
2 foo three
3 bar three
4 foo one
5 bar one
6 bar three
7 bar three
8 bar three
9 foo three
In [72]: left.dtypes
Out[72]:
X category
Y object
dtype: object
The right frame.
In [73]: right = pd.DataFrame(
....: {
....: "X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar"])),
....: "Z": [1, 2],
....: }
....: )
....:
In [74]: right
Out[74]:
X Z
0 foo 1
1 bar 2
In [75]: right.dtypes
Out[75]:
X category
Z int64
dtype: object
The merged result:
In [76]: result = pd.merge(left, right, how="outer")
In [77]: result
Out[77]:
X Y Z
0 bar one 2
1 bar three 2
2 bar one 2
3 bar three 2
4 bar three 2
5 bar three 2
6 foo one 1
7 foo three 1
8 foo one 1
9 foo three 1
In [78]: result.dtypes
Out[78]:
X category
Y object
Z int64
dtype: object
Note
The category dtypes must be exactly the same, meaning the same categories and the ordered attribute.
Otherwise the result will coerce to the categories’ dtype.
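A rough sketch, not from the original text, of what this note means; the exact fallback dtype can vary by pandas version, so treat the comment as illustrative:

import pandas as pd
from pandas.api.types import CategoricalDtype

cat_left = pd.DataFrame({"X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar"]))})
cat_right = pd.DataFrame(
    {"X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar", "baz"])), "Z": [1, 2]}
)

pd.merge(cat_left, cat_right, on="X")["X"].dtype   # categories differ -> no longer category dtype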
Note
Merging on category dtypes that are the same can be quite performant compared to object dtype merging.
Joining on index#
DataFrame.join() is a convenient method for combining the columns of two
potentially differently-indexed DataFrames into a single result
DataFrame. Here is a very basic example:
In [79]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=["K0", "K1", "K2"]
....: )
....:
In [80]: right = pd.DataFrame(
....: {"C": ["C0", "C2", "C3"], "D": ["D0", "D2", "D3"]}, index=["K0", "K2", "K3"]
....: )
....:
In [81]: result = left.join(right)
In [82]: result = left.join(right, how="outer")
The same as above, but with how='inner'.
In [83]: result = left.join(right, how="inner")
The data alignment here is on the indexes (row labels). This same behavior can
be achieved using merge plus additional arguments instructing it to use the
indexes:
In [84]: result = pd.merge(left, right, left_index=True, right_index=True, how="outer")
In [85]: result = pd.merge(left, right, left_index=True, right_index=True, how="inner")
Joining key columns on an index#
join() takes an optional on argument which may be a column
or multiple column names, which specifies that the passed DataFrame is to be
aligned on that column in the DataFrame. These two function calls are
completely equivalent:
left.join(right, on=key_or_keys)
pd.merge(
left, right, left_on=key_or_keys, right_index=True, how="left", sort=False
)
Obviously you can choose whichever form you find more convenient. For
many-to-one joins (where one of the DataFrame’s is already indexed by the
join key), using join may be more convenient. Here is a simple example:
In [86]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [87]: right = pd.DataFrame({"C": ["C0", "C1"], "D": ["D0", "D1"]}, index=["K0", "K1"])
In [88]: result = left.join(right, on="key")
In [89]: result = pd.merge(
....: left, right, left_on="key", right_index=True, how="left", sort=False
....: )
....:
To join on multiple keys, the passed DataFrame must have a MultiIndex:
In [90]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [91]: index = pd.MultiIndex.from_tuples(
....: [("K0", "K0"), ("K1", "K0"), ("K2", "K0"), ("K2", "K1")]
....: )
....:
In [92]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=index
....: )
....:
Now this can be joined by passing the two key column names:
In [93]: result = left.join(right, on=["key1", "key2"])
The default for DataFrame.join is to perform a left join (essentially a
“VLOOKUP” operation, for Excel users), which uses only the keys found in the
calling DataFrame. Other join types, for example inner join, can be just as
easily performed:
In [94]: result = left.join(right, on=["key1", "key2"], how="inner")
As you can see, this drops any rows where there was no match.
Joining a single Index to a MultiIndex#
You can join a singly-indexed DataFrame with a level of a MultiIndexed DataFrame.
The level will match on the name of the index of the singly-indexed frame against
a level name of the MultiIndexed frame.
In [95]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]},
....: index=pd.Index(["K0", "K1", "K2"], name="key"),
....: )
....:
In [96]: index = pd.MultiIndex.from_tuples(
....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")],
....: names=["key", "Y"],
....: )
....:
In [97]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]},
....: index=index,
....: )
....:
In [98]: result = left.join(right, how="inner")
This is equivalent to, but less verbose and more memory efficient / faster than, the reset_index-and-merge version shown below.
In [99]: result = pd.merge(
....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
....: ).set_index(["key","Y"])
....:
Joining with two MultiIndexes#
This is supported in a limited way, provided that the index for the right
argument is completely used in the join, and is a subset of the indices in
the left argument, as in this example:
In [100]: leftindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy"), [1, 2]], names=["abc", "xy", "num"]
.....: )
.....:
In [101]: left = pd.DataFrame({"v1": range(12)}, index=leftindex)
In [102]: left
Out[102]:
v1
abc xy num
a x 1 0
2 1
y 1 2
2 3
b x 1 4
2 5
y 1 6
2 7
c x 1 8
2 9
y 1 10
2 11
In [103]: rightindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy")], names=["abc", "xy"]
.....: )
.....:
In [104]: right = pd.DataFrame({"v2": [100 * i for i in range(1, 7)]}, index=rightindex)
In [105]: right
Out[105]:
v2
abc xy
a x 100
y 200
b x 300
y 400
c x 500
y 600
In [106]: left.join(right, on=["abc", "xy"], how="inner")
Out[106]:
v1 v2
abc xy num
a x 1 0 100
2 1 100
y 1 2 200
2 3 200
b x 1 4 300
2 5 300
y 1 6 400
2 7 400
c x 1 8 500
2 9 500
y 1 10 600
2 11 600
If that condition is not satisfied, a join with two multi-indexes can be
done using the following code.
In [107]: leftindex = pd.MultiIndex.from_tuples(
.....: [("K0", "X0"), ("K0", "X1"), ("K1", "X2")], names=["key", "X"]
.....: )
.....:
In [108]: left = pd.DataFrame(
.....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=leftindex
.....: )
.....:
In [109]: rightindex = pd.MultiIndex.from_tuples(
.....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")], names=["key", "Y"]
.....: )
.....:
In [110]: right = pd.DataFrame(
.....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=rightindex
.....: )
.....:
In [111]: result = pd.merge(
.....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
.....: ).set_index(["key", "X", "Y"])
.....:
Merging on a combination of columns and index levels#
Strings passed as the on, left_on, and right_on parameters
may refer to either column names or index level names. This enables merging
DataFrame instances on a combination of index levels and columns without
resetting indexes.
In [112]: left_index = pd.Index(["K0", "K0", "K1", "K2"], name="key1")
In [113]: left = pd.DataFrame(
.....: {
.....: "A": ["A0", "A1", "A2", "A3"],
.....: "B": ["B0", "B1", "B2", "B3"],
.....: "key2": ["K0", "K1", "K0", "K1"],
.....: },
.....: index=left_index,
.....: )
.....:
In [114]: right_index = pd.Index(["K0", "K1", "K2", "K2"], name="key1")
In [115]: right = pd.DataFrame(
.....: {
.....: "C": ["C0", "C1", "C2", "C3"],
.....: "D": ["D0", "D1", "D2", "D3"],
.....: "key2": ["K0", "K0", "K0", "K1"],
.....: },
.....: index=right_index,
.....: )
.....:
In [116]: result = left.merge(right, on=["key1", "key2"])
Note
When DataFrames are merged on a string that matches an index level in both
frames, the index level is preserved as an index level in the resulting
DataFrame.
Note
When DataFrames are merged using only some of the levels of a MultiIndex,
the extra levels will be dropped from the resulting merge. In order to
preserve those levels, use reset_index on those level names to move
those levels to columns prior to doing the merge.
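A brief sketch (not from the original text) of the reset_index workaround suggested in this note; the frame and level names here are made up for illustration:

import pandas as pd

midx = pd.MultiIndex.from_tuples([("K0", "X0"), ("K1", "X1")], names=["key", "X"])
left_mi = pd.DataFrame({"A": [1, 2]}, index=midx)
right_c = pd.DataFrame({"key": ["K0", "K1"], "B": [10, 20]})

left_mi.merge(right_c, on="key")                    # the extra level "X" is dropped
left_mi.reset_index("X").merge(right_c, on="key")   # "X" survives as an ordinary column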
Note
If a string matches both a column name and an index level name, then a
warning is issued and the column takes precedence. This will result in an
ambiguity error in a future version.
Overlapping value columns#
The merge suffixes argument takes a tuple of list of strings to append to
overlapping column names in the input DataFrames to disambiguate the result
columns:
In [117]: left = pd.DataFrame({"k": ["K0", "K1", "K2"], "v": [1, 2, 3]})
In [118]: right = pd.DataFrame({"k": ["K0", "K0", "K3"], "v": [4, 5, 6]})
In [119]: result = pd.merge(left, right, on="k")
In [120]: result = pd.merge(left, right, on="k", suffixes=("_l", "_r"))
DataFrame.join() has lsuffix and rsuffix arguments which behave
similarly.
In [121]: left = left.set_index("k")
In [122]: right = right.set_index("k")
In [123]: result = left.join(right, lsuffix="_l", rsuffix="_r")
Joining multiple DataFrames#
A list or tuple of DataFrames can also be passed to join()
to join them together on their indexes.
In [124]: right2 = pd.DataFrame({"v": [7, 8, 9]}, index=["K1", "K1", "K2"])
In [125]: result = left.join([right, right2])
Merging together values within Series or DataFrame columns#
Another fairly common situation is to have two like-indexed (or similarly
indexed) Series or DataFrame objects and wanting to “patch” values in
one object from values for matching indices in the other. Here is an example:
In [126]: df1 = pd.DataFrame(
.....: [[np.nan, 3.0, 5.0], [-4.6, np.nan, np.nan], [np.nan, 7.0, np.nan]]
.....: )
.....:
In [127]: df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5.0, 1.6, 4]], index=[1, 2])
For this, use the combine_first() method:
In [128]: result = df1.combine_first(df2)
Note that this method only takes values from the right DataFrame if they are
missing in the left DataFrame. A related method, update(),
alters non-NA values in place:
In [129]: df1.update(df2)
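A compact sketch, not in the original text, contrasting the two patch-style operations just shown:

import numpy as np
import pandas as pd

p1 = pd.DataFrame({"a": [np.nan, 2.0]})
p2 = pd.DataFrame({"a": [10.0, 20.0]})

p1.combine_first(p2)   # only the missing slot is filled          -> a = [10.0, 2.0]

p1.update(p2)          # in place: every non-NA value of p2 wins  -> p1 becomes [10.0, 20.0]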
Timeseries friendly merging#
Merging ordered data#
A merge_ordered() function allows combining time series and other
ordered data. In particular it has an optional fill_method keyword to
fill/interpolate missing data:
In [130]: left = pd.DataFrame(
.....: {"k": ["K0", "K1", "K1", "K2"], "lv": [1, 2, 3, 4], "s": ["a", "b", "c", "d"]}
.....: )
.....:
In [131]: right = pd.DataFrame({"k": ["K1", "K2", "K4"], "rv": [1, 2, 3]})
In [132]: pd.merge_ordered(left, right, fill_method="ffill", left_by="s")
Out[132]:
k lv s rv
0 K0 1.0 a NaN
1 K1 1.0 a 1.0
2 K2 1.0 a 2.0
3 K4 1.0 a 3.0
4 K1 2.0 b 1.0
5 K2 2.0 b 2.0
6 K4 2.0 b 3.0
7 K1 3.0 c 1.0
8 K2 3.0 c 2.0
9 K4 3.0 c 3.0
10 K1 NaN d 1.0
11 K2 4.0 d 2.0
12 K4 4.0 d 3.0
Merging asof#
A merge_asof() is similar to an ordered left-join except that we match on
nearest key rather than equal keys. For each row in the left DataFrame,
we select the last row in the right DataFrame whose on key is less
than the left’s key. Both DataFrames must be sorted by the key.
Optionally an asof merge can perform a group-wise merge. This matches the
by key equally, in addition to the nearest match on the on key.
For example, we might have trades and quotes and we want to asof
merge them.
In [133]: trades = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.038",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: ]
.....: ),
.....: "ticker": ["MSFT", "MSFT", "GOOG", "GOOG", "AAPL"],
.....: "price": [51.95, 51.95, 720.77, 720.92, 98.00],
.....: "quantity": [75, 155, 100, 100, 100],
.....: },
.....: columns=["time", "ticker", "price", "quantity"],
.....: )
.....:
In [134]: quotes = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.030",
.....: "20160525 13:30:00.041",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.049",
.....: "20160525 13:30:00.072",
.....: "20160525 13:30:00.075",
.....: ]
.....: ),
.....: "ticker": ["GOOG", "MSFT", "MSFT", "MSFT", "GOOG", "AAPL", "GOOG", "MSFT"],
.....: "bid": [720.50, 51.95, 51.97, 51.99, 720.50, 97.99, 720.50, 52.01],
.....: "ask": [720.93, 51.96, 51.98, 52.00, 720.93, 98.01, 720.88, 52.03],
.....: },
.....: columns=["time", "ticker", "bid", "ask"],
.....: )
.....:
In [135]: trades
Out[135]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
In [136]: quotes
Out[136]:
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
By default we are taking the asof of the quotes.
In [137]: pd.merge_asof(trades, quotes, on="time", by="ticker")
Out[137]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 2ms between the quote time and the trade time.
In [138]: pd.merge_asof(trades, quotes, on="time", by="ticker", tolerance=pd.Timedelta("2ms"))
Out[138]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 10ms between the quote time and the trade time and we
exclude exact matches on time. Note that though we exclude the exact matches
(of the quotes), prior quotes do propagate to that point in time.
In [139]: pd.merge_asof(
.....: trades,
.....: quotes,
.....: on="time",
.....: by="ticker",
.....: tolerance=pd.Timedelta("10ms"),
.....: allow_exact_matches=False,
.....: )
.....:
Out[139]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN
3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
Comparing objects#
The DataFrame.compare() and Series.compare() methods allow you to
compare two DataFrame or Series, respectively, and summarize their differences.
This feature was added in V1.1.0.
For example, you might want to compare two DataFrame and stack their differences
side by side.
In [140]: df = pd.DataFrame(
.....: {
.....: "col1": ["a", "a", "b", "b", "a"],
.....: "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
.....: "col3": [1.0, 2.0, 3.0, 4.0, 5.0],
.....: },
.....: columns=["col1", "col2", "col3"],
.....: )
.....:
In [141]: df
Out[141]:
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
In [142]: df2 = df.copy()
In [143]: df2.loc[0, "col1"] = "c"
In [144]: df2.loc[2, "col3"] = 4.0
In [145]: df2
Out[145]:
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
In [146]: df.compare(df2)
Out[146]:
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
By default, if two corresponding values are equal, they will be shown as NaN.
Furthermore, if all values in an entire row / column are equal, the row / column will be
omitted from the result. The remaining differences will be aligned on columns.
If you wish, you may choose to stack the differences on rows.
In [147]: df.compare(df2, align_axis=0)
Out[147]:
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
If you wish to keep all original rows and columns, set keep_shape argument
to True.
In [148]: df.compare(df2, keep_shape=True)
Out[148]:
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
You may also keep all the original values even if they are equal.
In [149]: df.compare(df2, keep_shape=True, keep_equal=True)
Out[149]:
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
| 40 | 1,193 | Time series data merge nearest right dataset has multiple same values
I have two dataframes. The first is like a log while the second is like inputs. I want to combine this log and inputs based on their time columns.
I tried using merge_asof but it only takes one input into the input dataframe.
Here is an example. Dataframe Log Times, log:
STARTTIME_Log
2020-05-28 21:57:27.000000
2020-05-28 06:35:20.000000
2020-05-28 19:51:39.000000
2020-05-28 20:43:23.000000
DataFrame Input Times and Values, input:
IO_Time IOName value
2020-05-28 21:57:35.037 A 79.65
2020-05-28 21:57:35.037 B 33.33
2020-05-28 06:35:22.037 A 27.53
2020-05-28 06:35:22.037 B 6.23
2020-05-28 09:30:20.037 A 43.50
2020-05-28 09:30:20.037 B 15.23
2020-05-28 19:51:40.037 A 100.00
2020-05-28 19:51:40.037 B 12.52
2020-05-28 20:43:25.037 A 56.43
2020-05-28 20:43:25.037 B 2.67
2020-05-28 22:32:56.037 A 23.45
2020-05-28 22:32:56.037 B 3.55
Expected Output:
STARTTIME_Log IOName value
2020-05-28 21:57:27.000000 A 79.65
2020-05-28 21:57:27.000000 B 33.33
2020-05-28 06:35:20.000000 A 27.53
2020-05-28 06:35:20.000000 B 6.23
2020-05-28 19:51:39.000000 A 100.00
2020-05-28 19:51:39.000000 B 12.52
2020-05-28 20:43:23.000000 A 56.43
2020-05-28 20:43:23.000000 B 2.67
The output merges the log and input dataframes at the nearest time.
The merge is done on STARTTIME_Log for the log dataframe and IO_Time on input.
If there is too large a difference then the rows are dropped.
How can I do that? | First, make sure that the IO_Time and STARTTIME_Log columns are of datetime type and are sorted (required to use merge_asof:
log['STARTTIME_Log'] = pd.to_datetime(log['STARTTIME_Log'])
input['IO_Time'] = pd.to_datetime(input['IO_Time'])
log = log.sort_values('STARTTIME_Log')
input = input.sort_values('IO_Time')
Now, use merge_asof with input as the left dataframe and log as the right. Note that you need to specify an acceptable tolerance value (I set it to 10 seconds here):
tol = pd.Timedelta('10s')
df = pd.merge_asof(input, log, left_on='IO_Time', right_on='STARTTIME_Log', tolerance=tol, direction='nearest')
df = df.dropna(subset=['STARTTIME_Log']).drop(columns='IO_Time')
Afterwards, the rows that don't have a match in log are dropped and the IO_Time column is removed.
Result:
IOName value STARTTIME_Log
0 A 27.53 2020-05-28 06:35:20
1 B 6.23 2020-05-28 06:35:20
4 A 100.00 2020-05-28 19:51:39
5 B 12.52 2020-05-28 19:51:39
6 A 56.43 2020-05-28 20:43:23
7 B 2.67 2020-05-28 20:43:23
8 A 79.65 2020-05-28 21:57:27
9 B 33.33 2020-05-28 21:57:27
|
|
66,867,941 | Getting an error when checking if values in a list match a column PANDAS | <p>I'm just wondering how one might overcome the below error.</p>
<p><strong>AttributeError: 'list' object has no attribute 'str'</strong></p>
<p>What I am trying to do is create a new column "PrivilegedAccess" and in this column I want to write "True" if any of the names in the first_names column match the ones outlined in the "Search_for_These_values" list and "False" if they don't</p>
<p>Code</p>
<pre><code>## Create list of Privileged accounts
Search_for_These_values = ['Privileged','Diagnostics','SYS','service account'] #creating list
pattern = '|'.join(Search_for_These_values) # joining list for comparison
PrivilegedAccounts_DF['PrivilegedAccess'] = PrivilegedAccounts_DF.columns=[['first_name']].str.contains(pattern)
PrivilegedAccounts_DF['PrivilegedAccess'] = PrivilegedAccounts_DF['PrivilegedAccess'].map({True: 'True', False: 'False'})
</code></pre>
<p>SAMPLE DATA:</p>
<pre><code> uid last_name first_name language role email_address department
0 121 Chad Diagnostics English Team Lead [email protected] Data Scientist
1 253 Montegu Paulo Spanish CIO [email protected] Marketing
2 545 Mitchel Susan English Team Lead [email protected] Data Scientist
3 555 Vuvko Matia Polish Marketing Lead [email protected] Marketing
4 568 Sisk Ivan English Supply Chain Lead [email protected] Supply Chain
5 475 Andrea Patrice Spanish Sales Graduate [email protected] Sales
6 365 Akkinapalli Cherifa French Supply Chain Assistance [email protected] Supply Chain
</code></pre>
<p>Note that the dtype of the first_name column is "object" and the dataframe is multi index (not sure how to change from multi index)</p>
<p>Many thanks</p> | 66,867,973 | 2021-03-30T09:09:32.117000 | 2 | null | 1 | 44 | python|pandas | <p>It seems you need to select one column for <code>str.contains</code> and then use map or convert the boolean result to strings:</p>
<pre><code>Search_for_These_values = ['Privileged','Diagnostics','SYS','service account'] #creating list
pattern = '|'.join(Search_for_These_values)
PrivilegedAccounts_DF = pd.DataFrame({'first_name':['Privileged 111',
'aaa SYS',
'sss']})
print (PrivilegedAccounts_DF.columns)
Index(['first_name'], dtype='object')
print (PrivilegedAccounts_DF.loc[0, 'first_name'])
Privileged 111
print (type(PrivilegedAccounts_DF.loc[0, 'first_name']))
<class 'str'>
</code></pre>
<hr />
<pre><code>PrivilegedAccounts_DF['PrivilegedAccess'] = PrivilegedAccounts_DF['first_name'].str.contains(pattern).astype(str)
print (PrivilegedAccounts_DF)
first_name PrivilegedAccess
0 Privileged 111 True
1 aaa SYS True
2 sss False
</code></pre>
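<p>If you specifically want the literal strings 'True'/'False' as in the original attempt, a small variation of the same idea could look like this (using case=False for a case-insensitive match is an assumption, not something the asker required):</p>
<pre><code>mask = PrivilegedAccounts_DF['first_name'].str.contains(pattern, case=False, na=False)
PrivilegedAccounts_DF['PrivilegedAccess'] = mask.map({True: 'True', False: 'False'})
</code></pre>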
<p>EDIT:</p>
<p>There is a problem: the columns are a one-level MultiIndex, so you need:</p>
<pre><code>PrivilegedAccounts_DF = pd.DataFrame({'first_name':['Privileged 111',
'aaa SYS',
'sss']})
#simulate problem
PrivilegedAccounts_DF.columns = [PrivilegedAccounts_DF.columns.tolist()]
print (PrivilegedAccounts_DF)
first_name
0 Privileged 111
1 aaa SYS
2 sss
#check columns
print (PrivilegedAccounts_DF.columns)
MultiIndex([('first_name',)],
)
</code></pre>
<p>The solution is to join the values, e.g. with an empty string:</p>
<pre><code>PrivilegedAccounts_DF.columns = PrivilegedAccounts_DF.columns.map(''.join)
</code></pre>
<p>So now the column names are correct:</p>
<pre><code>print (PrivilegedAccounts_DF.columns)
Index(['first_name'], dtype='object')
PrivilegedAccounts_DF['PrivilegedAccess'] = PrivilegedAccounts_DF['first_name'].str.contains(pattern).astype(str)
print (PrivilegedAccounts_DF)
</code></pre> | 2021-03-30T09:11:26.167000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.isin.html | It seems you need to select one column for str.contains and then use map or convert the boolean result to strings:
Search_for_These_values = ['Privileged','Diagnostics','SYS','service account'] #creating list
pattern = '|'.join(Search_for_These_values)
PrivilegedAccounts_DF = pd.DataFrame({'first_name':['Privileged 111',
'aaa SYS',
'sss']})
print (PrivilegedAccounts_DF.columns)
Index(['first_name'], dtype='object')
print (PrivilegedAccounts_DF.loc[0, 'first_name'])
Privileged 111
print (type(PrivilegedAccounts_DF.loc[0, 'first_name']))
<class 'str'>
PrivilegedAccounts_DF['PrivilegedAccess'] = PrivilegedAccounts_DF['first_name'].str.contains(pattern).astype(str)
print (PrivilegedAccounts_DF)
first_name PrivilegedAccess
0 Privileged 111 True
1 aaa SYS True
2 sss False
EDIT:
There is a problem: the columns are a one-level MultiIndex, so you need:
PrivilegedAccounts_DF = pd.DataFrame({'first_name':['Privileged 111',
'aaa SYS',
'sss']})
#simulate problem
PrivilegedAccounts_DF.columns = [PrivilegedAccounts_DF.columns.tolist()]
print (PrivilegedAccounts_DF)
first_name
0 Privileged 111
1 aaa SYS
2 sss
#check columns
print (PrivilegedAccounts_DF.columns)
MultiIndex([('first_name',)],
)
The solution is to join the values, e.g. with an empty string:
PrivilegedAccounts_DF.columns = PrivilegedAccounts_DF.columns.map(''.join)
So now the column names are correct:
print (PrivilegedAccounts_DF.columns)
Index(['first_name'], dtype='object')
PrivilegedAccounts_DF['PrivilegedAccess'] = PrivilegedAccounts_DF['first_name'].str.contains(pattern).astype(str)
print (PrivilegedAccounts_DF)
| 0 | 1,851 | Getting an error when checking if values in a list match a column PANDAS
I'm just wondering how one might overcome the below error.
AttributeError: 'list' object has no attribute 'str'
What I am trying to do is create a new column "PrivilegedAccess" and in this column I want to write "True" if any of the names in the first_names column match the ones outlined in the "Search_for_These_values" list and "False" if they don't
Code
## Create list of Privileged accounts
Search_for_These_values = ['Privileged','Diagnostics','SYS','service account'] #creating list
pattern = '|'.join(Search_for_These_values) # joining list for comparision
PrivilegedAccounts_DF['PrivilegedAccess'] = PrivilegedAccounts_DF.columns=[['first_name']].str.contains(pattern)
PrivilegedAccounts_DF['PrivilegedAccess'] = PrivilegedAccounts_DF['PrivilegedAccess'].map({True: 'True', False: 'False'})
SAMPLE DATA:
uid last_name first_name language role email_address department
0 121 Chad Diagnostics English Team Lead [email protected] Data Scientist
1 253 Montegu Paulo Spanish CIO [email protected] Marketing
2 545 Mitchel Susan English Team Lead [email protected] Data Scientist
3 555 Vuvko Matia Polish Marketing Lead [email protected] Marketing
4 568 Sisk Ivan English Supply Chain Lead [email protected] Supply Chain
5 475 Andrea Patrice Spanish Sales Graduate [email protected] Sales
6 365 Akkinapalli Cherifa French Supply Chain Assistance [email protected] Supply Chain
Note that the dtype of the first_name column is "object" and the dataframe is multi index (not sure how to change from multi index)
Many thanks | It seems you need to select one column for str.contains and then use map or convert the boolean result to strings:
Search_for_These_values = ['Privileged','Diagnostics','SYS','service account'] #creating list
pattern = '|'.join(Search_for_These_values)
PrivilegedAccounts_DF = pd.DataFrame({'first_name':['Privileged 111',
'aaa SYS',
'sss']})
print (PrivilegedAccounts_DF.columns)
Index(['first_name'], dtype='object')
print (PrivilegedAccounts_DF.loc[0, 'first_name'])
Privileged 111
print (type(PrivilegedAccounts_DF.loc[0, 'first_name']))
<class 'str'>
PrivilegedAccounts_DF['PrivilegedAccess'] = PrivilegedAccounts_DF['first_name'].str.contains(pattern).astype(str)
print (PrivilegedAccounts_DF)
first_name PrivilegedAccess
0 Privileged 111 True
1 aaa SYS True
2 sss False
EDIT:
There is a problem: the columns are a one-level MultiIndex, so you need:
PrivilegedAccounts_DF = pd.DataFrame({'first_name':['Privileged 111',
'aaa SYS',
'sss']})
#simulate problem
PrivilegedAccounts_DF.columns = [PrivilegedAccounts_DF.columns.tolist()]
print (PrivilegedAccounts_DF)
first_name
0 Privileged 111
1 aaa SYS
2 sss
#check columns
print (PrivilegedAccounts_DF.columns)
MultiIndex([('first_name',)],
)
The solution is to join the values, e.g. with an empty string:
PrivilegedAccounts_DF.columns = PrivilegedAccounts_DF.columns.map(''.join)
So now the column names are correct:
print (PrivilegedAccounts_DF.columns)
Index(['first_name'], dtype='object')
PrivilegedAccounts_DF['PrivilegedAccess'] = PrivilegedAccounts_DF['first_name'].str.contains(pattern).astype(str)
print (PrivilegedAccounts_DF)
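As a side note (not part of the accepted answer), if the goal is an exact whole-string match against the list rather than a substring match, Series.isin is a possible alternative, sketched here with the same assumed column name:
exact = PrivilegedAccounts_DF['first_name'].isin(Search_for_These_values)
PrivilegedAccounts_DF['PrivilegedAccess'] = exact.astype(str)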
|
|
59,822,568 | How i can change invalid string pattern with default string in dataframe? | <p>I have a dataframe like the one below.</p>
<pre><code>name birthdate
-----------------
john 21011990
steve 14021986
bob
alice 13020198
</code></pre>
<p>I want to detect invalid values in the birthdate column and then replace them.</p>
<p>The birthdate column uses the date format "DDMMYYYY", but the dataframe also contains invalid entries such as "13020198" and "". I want to change the invalid data to 31125000.</p>
<p>I want a result like the one below:</p>
<pre><code>name birthdate
-----------------
john 21011990
steve 14021986
bob 31125000
alice 31125000
</code></pre>
<p>thank you </p> | 59,822,864 | 2020-01-20T11:43:04.543000 | 3 | null | 1 | 57 | python|pandas | <p>You can first create a mask of the non-valid dates and then update those values:</p>
<pre><code>mask = df.birthdate.apply(lambda x: pd.to_datetime(x, format='%d%m%Y', errors='coerce')).isna()
df.loc[mask, 'birthdate'] = 31125000
name birthdate
0 john 21011990
1 steve 14021986
2 bob 31125000
3 alice 31125000
</code></pre> | 2020-01-20T12:01:26.027000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html | pandas.to_datetime#
pandas.to_datetime#
pandas.to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False, utc=None, format=None, exact=True, unit=None, infer_datetime_format=False, origin='unix', cache=True)[source]#
You can first create non-valid date mask and then update their values:
mask = df.birthdate.apply(lambda x: pd.to_datetime(x, format='%d%m%Y', errors='coerce')).isna()
df.loc[mask, 'birthdate'] = 31125000
name birthdate
0 john 21011990
1 steve 14021986
2 bob 31125000
3 alice 31125000
Convert argument to datetime.
This function converts a scalar, array-like, Series or
DataFrame/dict-like to a pandas datetime object.
Parameters
argint, float, str, datetime, list, tuple, 1-d array, Series, DataFrame/dict-likeThe object to convert to a datetime. If a DataFrame is provided, the
method expects minimally the following columns: "year",
"month", "day".
errors{‘ignore’, ‘raise’, ‘coerce’}, default ‘raise’
If 'raise', then invalid parsing will raise an exception.
If 'coerce', then invalid parsing will be set as NaT.
If 'ignore', then invalid parsing will return the input.
dayfirstbool, default FalseSpecify a date parse order if arg is str or is list-like.
If True, parses dates with the day first, e.g. "10/11/12"
is parsed as 2012-11-10.
Warning
dayfirst=True is not strict, but will prefer to parse
with day first. If a delimited date string cannot be parsed in
accordance with the given dayfirst option, e.g.
to_datetime(['31-12-2021']), then a warning will be shown.
yearfirstbool, default FalseSpecify a date parse order if arg is str or is list-like.
If True parses dates with the year first, e.g.
"10/11/12" is parsed as 2010-11-12.
If both dayfirst and yearfirst are True, yearfirst is
preceded (same as dateutil).
Warning
yearfirst=True is not strict, but will prefer to parse
with year first.
utcbool, default NoneControl timezone-related parsing, localization and conversion.
If True, the function always returns a timezone-aware
UTC-localized Timestamp, Series or
DatetimeIndex. To do this, timezone-naive inputs are
localized as UTC, while timezone-aware inputs are converted to UTC.
If False (default), inputs will not be coerced to UTC.
Timezone-naive inputs will remain naive, while timezone-aware ones
will keep their time offsets. Limitations exist for mixed
offsets (typically, daylight savings), see Examples section for details.
See also: pandas general documentation about timezone conversion and
localization.
formatstr, default NoneThe strftime to parse time, e.g. "%d/%m/%Y". Note that
"%f" will parse all the way up to nanoseconds. See
strftime documentation for more information on choices.
exactbool, default TrueControl how format is used:
If True, require an exact format match.
If False, allow the format to match anywhere in the target
string.
unitstr, default ‘ns’The unit of the arg (D,s,ms,us,ns) denote the unit, which is an
integer or float number. This will be based off the origin.
Example, with unit='ms' and origin='unix', this would calculate
the number of milliseconds to the unix epoch start.
infer_datetime_formatbool, default FalseIf True and no format is given, attempt to infer the format
of the datetime strings based on the first non-NaN element,
and if it can be inferred, switch to a faster method of parsing them.
In some cases this can increase the parsing speed by ~5-10x.
originscalar, default ‘unix’Define the reference date. The numeric values would be parsed as number
of units (defined by unit) since this reference date.
If 'unix' (or POSIX) time; origin is set to 1970-01-01.
If 'julian', unit must be 'D', and origin is set to
beginning of Julian Calendar. Julian day number 0 is assigned
to the day starting at noon on January 1, 4713 BC.
If Timestamp convertible, origin is set to Timestamp identified by
origin.
cachebool, default TrueIf True, use a cache of unique, converted dates to apply the
datetime conversion. May produce significant speed-up when parsing
duplicate date strings, especially ones with timezone offsets. The cache
is only used when there are at least 50 values. The presence of
out-of-bounds values will render the cache unusable and may slow down
parsing.
Changed in version 0.25.0: changed default value from False to True.
Returns
datetimeIf parsing succeeded.
Return type depends on input (types in parenthesis correspond to
fallback in case of unsuccessful timezone or out-of-range timestamp
parsing):
scalar: Timestamp (or datetime.datetime)
array-like: DatetimeIndex (or Series with
object dtype containing datetime.datetime)
Series: Series of datetime64 dtype (or
Series of object dtype containing
datetime.datetime)
DataFrame: Series of datetime64 dtype (or
Series of object dtype containing
datetime.datetime)
Raises
ParserErrorWhen parsing a date from string fails.
ValueErrorWhen another datetime conversion error happens. For example when one
of ‘year’, ‘month’, day’ columns is missing in a DataFrame, or
when a Timezone-aware datetime.datetime is found in an array-like
of mixed time offsets, and utc=False.
See also
DataFrame.astypeCast argument to a specified dtype.
to_timedeltaConvert argument to timedelta.
convert_dtypesConvert dtypes.
Notes
Many input types are supported, and lead to different output types:
scalars can be int, float, str, datetime object (from stdlib datetime
module or numpy). They are converted to Timestamp when
possible, otherwise they are converted to datetime.datetime.
None/NaN/null scalars are converted to NaT.
array-like can contain int, float, str, datetime objects. They are
converted to DatetimeIndex when possible, otherwise they are
converted to Index with object dtype, containing
datetime.datetime. None/NaN/null entries are converted to
NaT in both cases.
Series are converted to Series with datetime64
dtype when possible, otherwise they are converted to Series with
object dtype, containing datetime.datetime. None/NaN/null
entries are converted to NaT in both cases.
DataFrame/dict-like are converted to Series with
datetime64 dtype. For each row a datetime is created from assembling
the various dataframe columns. Column keys can be common abbreviations
like [‘year’, ‘month’, ‘day’, ‘minute’, ‘second’, ‘ms’, ‘us’, ‘ns’]) or
plurals of the same.
The following causes are responsible for datetime.datetime objects
being returned (possibly inside an Index or a Series with
object dtype) instead of a proper pandas designated type
(Timestamp, DatetimeIndex or Series
with datetime64 dtype):
when any input element is before Timestamp.min or after
Timestamp.max, see timestamp limitations.
when utc=False (default) and the input is an array-like or
Series containing mixed naive/aware datetime, or aware with mixed
time offsets. Note that this happens in the (quite frequent) situation when
the timezone has a daylight savings policy. In that case you may wish to
use utc=True.
Examples
Handling various input formats
Assembling a datetime from multiple columns of a DataFrame. The keys
can be common abbreviations like [‘year’, ‘month’, ‘day’, ‘minute’, ‘second’,
‘ms’, ‘us’, ‘ns’]) or plurals of the same
>>> df = pd.DataFrame({'year': [2015, 2016],
... 'month': [2, 3],
... 'day': [4, 5]})
>>> pd.to_datetime(df)
0 2015-02-04
1 2016-03-05
dtype: datetime64[ns]
Passing infer_datetime_format=True can often-times speedup a parsing
if its not an ISO8601 format exactly, but in a regular format.
>>> s = pd.Series(['3/11/2000', '3/12/2000', '3/13/2000'] * 1000)
>>> s.head()
0 3/11/2000
1 3/12/2000
2 3/13/2000
3 3/11/2000
4 3/12/2000
dtype: object
>>> %timeit pd.to_datetime(s, infer_datetime_format=True)
100 loops, best of 3: 10.4 ms per loop
>>> %timeit pd.to_datetime(s, infer_datetime_format=False)
1 loop, best of 3: 471 ms per loop
Using a unix epoch time
>>> pd.to_datetime(1490195805, unit='s')
Timestamp('2017-03-22 15:16:45')
>>> pd.to_datetime(1490195805433502912, unit='ns')
Timestamp('2017-03-22 15:16:45.433502912')
Warning
For float arg, precision rounding might happen. To prevent
unexpected behavior use a fixed-width exact type.
Using a non-unix epoch origin
>>> pd.to_datetime([1, 2, 3], unit='D',
... origin=pd.Timestamp('1960-01-01'))
DatetimeIndex(['1960-01-02', '1960-01-03', '1960-01-04'],
dtype='datetime64[ns]', freq=None)
Non-convertible date/times
If a date does not meet the timestamp limitations, passing errors='ignore'
will return the original input instead of raising any exception.
Passing errors='coerce' will force an out-of-bounds date to NaT,
in addition to forcing non-dates (or non-parseable dates) to NaT.
>>> pd.to_datetime('13000101', format='%Y%m%d', errors='ignore')
datetime.datetime(1300, 1, 1, 0, 0)
>>> pd.to_datetime('13000101', format='%Y%m%d', errors='coerce')
NaT
Timezones and time offsets
The default behaviour (utc=False) is as follows:
Timezone-naive inputs are converted to timezone-naive DatetimeIndex:
>>> pd.to_datetime(['2018-10-26 12:00', '2018-10-26 13:00:15'])
DatetimeIndex(['2018-10-26 12:00:00', '2018-10-26 13:00:15'],
dtype='datetime64[ns]', freq=None)
Timezone-aware inputs with constant time offset are converted to
timezone-aware DatetimeIndex:
>>> pd.to_datetime(['2018-10-26 12:00 -0500', '2018-10-26 13:00 -0500'])
DatetimeIndex(['2018-10-26 12:00:00-05:00', '2018-10-26 13:00:00-05:00'],
dtype='datetime64[ns, pytz.FixedOffset(-300)]', freq=None)
However, timezone-aware inputs with mixed time offsets (for example
issued from a timezone with daylight savings, such as Europe/Paris)
are not successfully converted to a DatetimeIndex. Instead a
simple Index containing datetime.datetime objects is
returned:
>>> pd.to_datetime(['2020-10-25 02:00 +0200', '2020-10-25 04:00 +0100'])
Index([2020-10-25 02:00:00+02:00, 2020-10-25 04:00:00+01:00],
dtype='object')
A mix of timezone-aware and timezone-naive inputs is converted to
a timezone-aware DatetimeIndex if the offsets of the timezone-aware
are constant:
>>> from datetime import datetime
>>> pd.to_datetime(["2020-01-01 01:00 -01:00", datetime(2020, 1, 1, 3, 0)])
DatetimeIndex(['2020-01-01 01:00:00-01:00', '2020-01-01 02:00:00-01:00'],
dtype='datetime64[ns, pytz.FixedOffset(-60)]', freq=None)
Setting utc=True solves most of the above issues:
Timezone-naive inputs are localized as UTC
>>> pd.to_datetime(['2018-10-26 12:00', '2018-10-26 13:00'], utc=True)
DatetimeIndex(['2018-10-26 12:00:00+00:00', '2018-10-26 13:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq=None)
Timezone-aware inputs are converted to UTC (the output represents the
exact same datetime, but viewed from the UTC time offset +00:00).
>>> pd.to_datetime(['2018-10-26 12:00 -0530', '2018-10-26 12:00 -0500'],
... utc=True)
DatetimeIndex(['2018-10-26 17:30:00+00:00', '2018-10-26 17:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq=None)
Inputs can contain both naive and aware, string or datetime, the above
rules still apply
>>> from datetime import timezone, timedelta
>>> pd.to_datetime(['2018-10-26 12:00', '2018-10-26 12:00 -0530',
... datetime(2020, 1, 1, 18),
... datetime(2020, 1, 1, 18,
... tzinfo=timezone(-timedelta(hours=1)))],
... utc=True)
DatetimeIndex(['2018-10-26 12:00:00+00:00', '2018-10-26 17:30:00+00:00',
'2020-01-01 18:00:00+00:00', '2020-01-01 19:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq=None)
| 228 | 540 | How i can change invalid string pattern with default string in dataframe?
I have a dataframe like the one below.
name birthdate
-----------------
john 21011990
steve 14021986
bob
alice 13020198
I want to detect invalid values in the birthdate column and then replace them.
The birthdate column uses the date format "DDMMYYYY", but the dataframe also contains invalid entries such as "13020198" and "". I want to change the invalid data to 31125000.
I want a result like the one below:
name birthdate
-----------------
john 21011990
steve 14021986
bob 31125000
alice 31125000
thank you | You can first create a mask of the non-valid dates and then update those values:
mask = df.birthdate.apply(lambda x: pd.to_datetime(x, format='%d%m%Y', errors='coerce')).isna()
df.loc[mask, 'birthdate'] = 31125000
name birthdate
0 john 21011990
1 steve 14021986
2 bob 31125000
3 alice 31125000
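An equivalent variant (an aside, not part of the original answer) builds the mask without apply and substitutes the default with Series.where; the column names follow the question:
parsed = pd.to_datetime(df['birthdate'], format='%d%m%Y', errors='coerce')
# Keep the original value where parsing succeeded, otherwise use the default.
df['birthdate'] = df['birthdate'].where(parsed.notna(), 31125000)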
|
|
62,441,689 | Pandas Groupby Ranges when ranges are not continuous | <p>I have a dataframe that looks like this:</p>
<pre><code>id | A | B | C
------------------------------
1 | 0.1 | 1.2 | 100
2 | 0.2 | 1.4 | 200
3 | 0.3 | 1.6 | 300
4 | 0.4 | 1.8 | 400
5 | 0.5 | 2.0 | 500
6 | 0.6 | 2.2 | 600
7 | 0.7 | 2.4 | 700
8 | 0.8 | 2.6 | 800
9 | 0.9 | 2.8 | 900
10 | 1.0 | 3.0 | 1000
11 | 1.1 | 3.2 | 1100
</code></pre>
<p>I want to use groupby on this dataframe to group it by a range of increments for column 'A' or 'B'.
But the ranges are not consecutive or mutually exclusive; they are like this:</p>
<pre><code>(0,1.1.1]
(0.2,1.1]
(0.4,1.1]
(0.6,1.1]
(0.8,1.1]
(1.0,1.1]
</code></pre>
<p>Then apply some functions to it (mean and sum), so my end result will be something like this:</p>
<pre><code> | A_mean | B_mean | C_sum
A_bins | | |
-------------------------------------
(0,1.1.1] | 0.6 | 2.2 | 6600
(0.2,1.1] | 0.7 | 2.4 | 6300
(0.4,1.1] | 0.8 | 2.6 | 5600
(0.6,1.1] | 0.9 | 2.8 | 4500
(0.8,1.1] | 1.0 | 3.0 | 3000
(1.0,1.1] | 1.1 | 3.2 | 1100
</code></pre>
<p>I was thinking of trying <code>groupby</code> with <code>pd.cut()</code> but I think <code>pd.cut()</code> won't be able to work with those intervals.</p>
<p>So, is there any way that I can achieve that with those kinds of ranges? Or any kind of ranges that are not in the form of something like: <code>np.arange(0, 1.1+0.05, 0.2)</code></p>
<p>Thank you all</p> | 62,442,506 | 2020-06-18T03:07:44.227000 | 2 | null | 1 | 61 | python|pandas | <p>How about just using the apply function to generate the metrics you need.</p>
<pre><code>df2 = pd.DataFrame({'A_bins': [(0.1,1.1), (0.2,1.1), (0.4,1.1), (0.6,1.1), (0.8,1.1), (1.0,1.1)]})
def get_sum(row): # this is where the logic for your metrics goes
return df.loc[(row['A_bins'][0]<df['A']) & (row['A_bins'][1]>=df['A']),'C'].sum()
df2['C_sum'] = df2.apply(get_sum, axis = 1)
print (df2)
</code></pre>
<p>Output:</p>
<pre><code> A_bins C_sum
0 (0.1, 1.1) 6500.0
1 (0.2, 1.1) 6300.0
2 (0.4, 1.1) 5600.0
3 (0.6, 1.1) 4500.0
4 (0.8, 1.1) 3000.0
5 (1.0, 1.1) 1100.0
</code></pre> | 2020-06-18T04:39:35.983000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.cut.html | pandas.cut#
pandas.cut#
pandas.cut(x, bins, right=True, labels=None, retbins=False, precision=3, include_lowest=False, duplicates='raise', ordered=True)[source]#
Bin values into discrete intervals.
Use cut when you need to segment and sort data values into bins. This
function is also useful for going from a continuous variable to a
How about just using the apply function to generate the metrics you need.
df2 = pd.DataFrame({'A_bins': [(0.1,1.1), (0.2,1.1), (0.4,1.1), (0.6,1.1), (0.8,1.1), (1.0,1.1)]})
def get_sum(row): # this is where the logic for your metrics goes
return df.loc[(row['A_bins'][0]<df['A']) & (row['A_bins'][1]>=df['A']),'C'].sum()
df2['C_sum'] = df2.apply(get_sum, axis = 1)
print (df2)
Output:
A_bins C_sum
0 (0.1, 1.1) 6500.0
1 (0.2, 1.1) 6300.0
2 (0.4, 1.1) 5600.0
3 (0.6, 1.1) 4500.0
4 (0.8, 1.1) 3000.0
5 (1.0, 1.1) 1100.0
categorical variable. For example, cut could convert ages to groups of
age ranges. Supports binning into an equal number of bins, or a
pre-specified array of bins.
Parameters
xarray-likeThe input array to be binned. Must be 1-dimensional.
binsint, sequence of scalars, or IntervalIndexThe criteria to bin by.
int : Defines the number of equal-width bins in the range of x. The
range of x is extended by .1% on each side to include the minimum
and maximum values of x.
sequence of scalars : Defines the bin edges allowing for non-uniform
width. No extension of the range of x is done.
IntervalIndex : Defines the exact bins to be used. Note that
IntervalIndex for bins must be non-overlapping.
rightbool, default TrueIndicates whether bins includes the rightmost edge or not. If
right == True (the default), then the bins [1, 2, 3, 4]
indicate (1,2], (2,3], (3,4]. This argument is ignored when
bins is an IntervalIndex.
labelsarray or False, default NoneSpecifies the labels for the returned bins. Must be the same length as
the resulting bins. If False, returns only integer indicators of the
bins. This affects the type of the output container (see below).
This argument is ignored when bins is an IntervalIndex. If True,
raises an error. When ordered=False, labels must be provided.
retbinsbool, default FalseWhether to return the bins or not. Useful when bins is provided
as a scalar.
precisionint, default 3The precision at which to store and display the bins labels.
include_lowestbool, default FalseWhether the first interval should be left-inclusive or not.
duplicates{default ‘raise’, ‘drop’}, optionalIf bin edges are not unique, raise ValueError or drop non-uniques.
orderedbool, default TrueWhether the labels are ordered or not. Applies to returned types
Categorical and Series (with Categorical dtype). If True,
the resulting categorical will be ordered. If False, the resulting
categorical will be unordered (labels must be provided).
New in version 1.1.0.
Returns
outCategorical, Series, or ndarrayAn array-like object representing the respective bin for each value
of x. The type depends on the value of labels.
None (default) : returns a Series for Series x or a
Categorical for all other inputs. The values stored within
are Interval dtype.
sequence of scalars : returns a Series for Series x or a
Categorical for all other inputs. The values stored within
are whatever the type in the sequence is.
False : returns an ndarray of integers.
binsnumpy.ndarray or IntervalIndex.The computed or specified bins. Only returned when retbins=True.
For scalar or sequence bins, this is an ndarray with the computed
bins. If set duplicates=drop, bins will drop non-unique bin. For
an IntervalIndex bins, this is equal to bins.
See also
qcutDiscretize variable into equal-sized buckets based on rank or based on sample quantiles.
CategoricalArray type for storing data that come from a fixed set of values.
SeriesOne-dimensional array with axis labels (including time series).
IntervalIndexImmutable Index implementing an ordered, sliceable set.
Notes
Any NA values will be NA in the result. Out of bounds values will be NA in
the resulting Series or Categorical object.
Reference the user guide for more examples.
Examples
Discretize into three equal-sized bins.
>>> pd.cut(np.array([1, 7, 5, 4, 6, 3]), 3)
...
[(0.994, 3.0], (5.0, 7.0], (3.0, 5.0], (3.0, 5.0], (5.0, 7.0], ...
Categories (3, interval[float64, right]): [(0.994, 3.0] < (3.0, 5.0] ...
>>> pd.cut(np.array([1, 7, 5, 4, 6, 3]), 3, retbins=True)
...
([(0.994, 3.0], (5.0, 7.0], (3.0, 5.0], (3.0, 5.0], (5.0, 7.0], ...
Categories (3, interval[float64, right]): [(0.994, 3.0] < (3.0, 5.0] ...
array([0.994, 3. , 5. , 7. ]))
Discovers the same bins, but assign them specific labels. Notice that
the returned Categorical’s categories are labels and is ordered.
>>> pd.cut(np.array([1, 7, 5, 4, 6, 3]),
... 3, labels=["bad", "medium", "good"])
['bad', 'good', 'medium', 'medium', 'good', 'bad']
Categories (3, object): ['bad' < 'medium' < 'good']
ordered=False will result in unordered categories when labels are passed.
This parameter can be used to allow non-unique labels:
>>> pd.cut(np.array([1, 7, 5, 4, 6, 3]), 3,
... labels=["B", "A", "B"], ordered=False)
['B', 'B', 'A', 'A', 'B', 'B']
Categories (2, object): ['A', 'B']
labels=False implies you just want the bins back.
>>> pd.cut([0, 1, 1, 2], bins=4, labels=False)
array([0, 1, 1, 3])
Passing a Series as an input returns a Series with categorical dtype:
>>> s = pd.Series(np.array([2, 4, 6, 8, 10]),
... index=['a', 'b', 'c', 'd', 'e'])
>>> pd.cut(s, 3)
...
a (1.992, 4.667]
b (1.992, 4.667]
c (4.667, 7.333]
d (7.333, 10.0]
e (7.333, 10.0]
dtype: category
Categories (3, interval[float64, right]): [(1.992, 4.667] < (4.667, ...
Passing a Series as an input returns a Series with mapping value.
It is used to map numerically to intervals based on bins.
>>> s = pd.Series(np.array([2, 4, 6, 8, 10]),
... index=['a', 'b', 'c', 'd', 'e'])
>>> pd.cut(s, [0, 2, 4, 6, 8, 10], labels=False, retbins=True, right=False)
...
(a 1.0
b 2.0
c 3.0
d 4.0
e NaN
dtype: float64,
array([ 0, 2, 4, 6, 8, 10]))
Use drop optional when bins is not unique
>>> pd.cut(s, [0, 2, 4, 6, 10, 10], labels=False, retbins=True,
... right=False, duplicates='drop')
...
(a 1.0
b 2.0
c 3.0
d 3.0
e NaN
dtype: float64,
array([ 0, 2, 4, 6, 10]))
Passing an IntervalIndex for bins results in those categories exactly.
Notice that values not covered by the IntervalIndex are set to NaN. 0
is to the left of the first bin (which is closed on the right), and 1.5
falls between two bins.
>>> bins = pd.IntervalIndex.from_tuples([(0, 1), (2, 3), (4, 5)])
>>> pd.cut([0, 0.5, 1.5, 2.5, 4.5], bins)
[NaN, (0.0, 1.0], NaN, (2.0, 3.0], (4.0, 5.0]]
Categories (3, interval[int64, right]): [(0, 1] < (2, 3] < (4, 5]]
| 338 | 883 | Pandas Groupby Ranges when ranges are not continuous
I have a dataframe that looks like this:
id | A | B | C
------------------------------
1 | 0.1 | 1.2 | 100
2 | 0.2 | 1.4 | 200
3 | 0.3 | 1.6 | 300
4 | 0.4 | 1.8 | 400
5 | 0.5 | 2.0 | 500
6 | 0.6 | 2.2 | 600
7 | 0.7 | 2.4 | 700
8 | 0.8 | 2.6 | 800
9 | 0.9 | 2.8 | 900
10 | 1.0 | 3.0 | 1000
11 | 1.1 | 3.2 | 1100
I want to use groupby on this dataframe to group it by a range of increments for column 'A' or 'B'.
But the ranges are not consecutive or mutually exclusive; they are like this:
(0,1.1.1]
(0.2,1.1]
(0.4,1.1]
(0.6,1.1]
(0.8,1.1]
(1.0,1.1]
Then apply some functions to it (mean and sum), so my end result will be something like this:
| A_mean | B_mean | C_sum
A_bins | | |
-------------------------------------
(0,1.1.1] | 0.6 | 2.2 | 6600
(0.2,1.1] | 0.7 | 2.4 | 6300
(0.4,1.1] | 0.8 | 2.6 | 5600
(0.6,1.1] | 0.9 | 2.8 | 4500
(0.8,1.1] | 1.0 | 3.0 | 3000
(1.0,1.1] | 1.1 | 3.2 | 1100
I was thinking of trying groupby with pd.cut() but I think pd.cut() won't be able to work with those intervals.
So, is there any way that I can achieve that with those kinds of ranges? Or any kind of ranges that are not in the form of something like: np.arange(0, 1.1+0.05, 0.2)
Thank you all | How about just using the apply function to generate the metrics you need.
df2 = pd.DataFrame({'A_bins': [(0.1,1.1), (0.2,1.1), (0.4,1.1), (0.6,1.1), (0.8,1.1), (1.0,1.1)]})
def get_sum(row): # this is where the logic for your metrics goes
return df.loc[(row['A_bins'][0]<df['A']) & (row['A_bins'][1]>=df['A']),'C'].sum()
df2['C_sum'] = df2.apply(get_sum, axis = 1)
print (df2)
Output:
A_bins C_sum
0 (0.1, 1.1) 6500.0
1 (0.2, 1.1) 6300.0
2 (0.4, 1.1) 5600.0
3 (0.6, 1.1) 4500.0
4 (0.8, 1.1) 3000.0
5 (1.0, 1.1) 1100.0
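Since the question also asks for A_mean and B_mean, one possible extension of the same idea (a sketch, not the accepted code; pd.cut is not usable directly here because an IntervalIndex of overlapping bins is rejected) computes all three aggregates per interval:
import pandas as pd

df = pd.DataFrame({'A': [round(0.1 * i, 1) for i in range(1, 12)],
                   'B': [round(1.2 + 0.2 * (i - 1), 1) for i in range(1, 12)],
                   'C': [100 * i for i in range(1, 12)]})

bins = [(0.0, 1.1), (0.2, 1.1), (0.4, 1.1), (0.6, 1.1), (0.8, 1.1), (1.0, 1.1)]
rows = []
for lo, hi in bins:
    sub = df[(df['A'] > lo) & (df['A'] <= hi)]   # half-open interval (lo, hi]
    rows.append({'A_bins': f'({lo},{hi}]',
                 'A_mean': sub['A'].mean(),
                 'B_mean': sub['B'].mean(),
                 'C_sum': sub['C'].sum()})
print(pd.DataFrame(rows).set_index('A_bins'))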
|
|
69,511,132 | How to add positive and negative increments to every row based on a specific date? | <p>I have a pandas df which has 2 columns such as <code>Date, First_Date (constant)</code>.</p>
<p>I am trying to add a new column whose value is 0 where First_Date = Date. Then, all rows below that row should increment negatively (-1, -2, -3, etc.), and all rows above it should increment positively (1, 2, 3, 4, etc.). Please see the attachment for a visualization of the concept.</p>
<p>I am not sure if there is a pandas function to do this or if a function is better in this case. Any guidance would be great.</p>
<p><a href="https://i.stack.imgur.com/bea18.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bea18.png" alt="enter image description here" /></a></p> | 69,511,213 | 2021-10-09T23:05:50.443000 | 2 | null | 0 | 101 | python|pandas | <pre><code>>>> df = pd.DataFrame({'Date':pd.date_range('2020-01-01', '2020-01-18')})
>>> df
Date
0 2020-01-01
1 2020-01-02
2 2020-01-03
3 2020-01-04
4 2020-01-05
5 2020-01-06
6 2020-01-07
7 2020-01-08
8 2020-01-09
9 2020-01-10
10 2020-01-11
11 2020-01-12
12 2020-01-13
13 2020-01-14
14 2020-01-15
15 2020-01-16
16 2020-01-17
17 2020-01-18
</code></pre>
<p>Check out pandas <a href="https://pandas.pydata.org/docs/reference/api/pandas.Timestamp.html" rel="nofollow noreferrer">Timestamps</a> for converting strings to datetimes, etc. There is no reason to work with strings and ints here.</p>
<pre><code>>>> df['index'] = df - pd.Timestamp('2020-01-11')
>>> df
Date index
0 2020-01-01 -10 days
1 2020-01-02 -9 days
2 2020-01-03 -8 days
3 2020-01-04 -7 days
4 2020-01-05 -6 days
5 2020-01-06 -5 days
6 2020-01-07 -4 days
7 2020-01-08 -3 days
8 2020-01-09 -2 days
9 2020-01-10 -1 days
10 2020-01-11 0 days
11 2020-01-12 1 days
12 2020-01-13 2 days
13 2020-01-14 3 days
14 2020-01-15 4 days
15 2020-01-16 5 days
16 2020-01-17 6 days
17 2020-01-18 7 days
</code></pre>
<p>You can get your desired ints afterwards with:</p>
<pre><code>>>> df['index'].transform(lambda x: x.days)
0 -10
1 -9
2 -8
3 -7
4 -6
5 -5
6 -4
7 -3
8 -2
9 -1
10 0
11 1
12 2
13 3
14 4
15 5
16 6
17 7
</code></pre>
<p>EDIT</p>
<p>To answer more specifically: since you have string dates, you have to do the following first:</p>
<pre><code>df[['Date', 'First_Date']] = df[['Date', 'First_Date']].astype('datetime64[ns]')
</code></pre>
<p>then you can subtract the columns and get your result:</p>
<pre><code>df['index'] = df['Date'] - df['First_Date']
</code></pre> | 2021-10-09T23:27:09.653000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.Index.shift.html | >>> df = pd.DataFrame({'Date':pd.date_range('2020-01-01', '2020-01-18')})
>>> df
Date
0 2020-01-01
1 2020-01-02
2 2020-01-03
3 2020-01-04
4 2020-01-05
5 2020-01-06
6 2020-01-07
7 2020-01-08
8 2020-01-09
9 2020-01-10
10 2020-01-11
11 2020-01-12
12 2020-01-13
13 2020-01-14
14 2020-01-15
15 2020-01-16
16 2020-01-17
17 2020-01-18
Check out pandas Timestamps for converting strings to datetimes, etc. There is no reason to work with strings and ints here.
>>> df['index'] = df - pd.Timestamp('2020-01-11')
>>> df
Date index
0 2020-01-01 -10 days
1 2020-01-02 -9 days
2 2020-01-03 -8 days
3 2020-01-04 -7 days
4 2020-01-05 -6 days
5 2020-01-06 -5 days
6 2020-01-07 -4 days
7 2020-01-08 -3 days
8 2020-01-09 -2 days
9 2020-01-10 -1 days
10 2020-01-11 0 days
11 2020-01-12 1 days
12 2020-01-13 2 days
13 2020-01-14 3 days
14 2020-01-15 4 days
15 2020-01-16 5 days
16 2020-01-17 6 days
17 2020-01-18 7 days
You can get your desired ints afterwards with:
>>> df['index'].transform(lambda x: x.days)
0 -10
1 -9
2 -8
3 -7
4 -6
5 -5
6 -4
7 -3
8 -2
9 -1
10 0
11 1
12 2
13 3
14 4
15 5
16 6
17 7
EDIT
To answer more specifically: since you have string dates, you have to do the following first:
df[['Date', 'First_Date']] = df[['Date', 'First_Date']].astype('datetime64[ns]')
then you can subtract the columns and get your result:
df['index'] = df['Date'] - df['First_Date']
| 0 | 1,466 | How to add positive and negative increments to every row based on a specific date?
I have a pandas df which has 2 columns such as Date, First_Date (constant).
I am trying to add a new column whose value is 0 where First_Date = Date. Then, all rows below that row should increment negatively (-1, -2, -3, etc.), and all rows above it should increment positively (1, 2, 3, 4, etc.). Please see the attachment for a visualization of the concept.
I am not sure if there is a pandas function to do this or if a function is better in this case. Any guidance would be great.
| / | >>> df = pd.DataFrame({'Date':pd.date_range('2020-01-01', '2020-01-18')})
>>> df
Date
0 2020-01-01
1 2020-01-02
2 2020-01-03
3 2020-01-04
4 2020-01-05
5 2020-01-06
6 2020-01-07
7 2020-01-08
8 2020-01-09
9 2020-01-10
10 2020-01-11
11 2020-01-12
12 2020-01-13
13 2020-01-14
14 2020-01-15
15 2020-01-16
16 2020-01-17
17 2020-01-18
Checkout pandas Timestamps strings to DateTime etc. No reason to work with strings and ints.
>>> df['index'] = df - pd.Timestamp('2020-01-11')
>>> df
Date index
0 2020-01-01 -10 days
1 2020-01-02 -9 days
2 2020-01-03 -8 days
3 2020-01-04 -7 days
4 2020-01-05 -6 days
5 2020-01-06 -5 days
6 2020-01-07 -4 days
7 2020-01-08 -3 days
8 2020-01-09 -2 days
9 2020-01-10 -1 days
10 2020-01-11 0 days
11 2020-01-12 1 days
12 2020-01-13 2 days
13 2020-01-14 3 days
14 2020-01-15 4 days
15 2020-01-16 5 days
16 2020-01-17 6 days
17 2020-01-18 7 days
You can get your desired ints afterwards with:
>>> df['index'].transform(lambda x: x.days)
0 -10
1 -9
2 -8
3 -7
4 -6
5 -5
6 -4
7 -3
8 -2
9 -1
10 0
11 1
12 2
13 3
14 4
15 5
16 6
17 7
EDIT
To answer more specifically since you have string dates you have to do the following first
df[['Date', 'First_Date']] = df[['Date', 'First_Date']].astype('datetime64[ns]')
then you can subtract the columns and get your result:
df['index'] = df['Date'] - df['First_Date']
|
69,712,773 | Remove duplicates that are in included in two columns in pandas | <p>I have a dataframe that has two columns. I want to delete rows such that, for each row, it includes only one instance in the first column, but all unique values in column two are included.</p>
<p>Here is an example:</p>
<pre><code>data = [[1,100],
[1,101],
[1,102],
[1,103],
[2,102],
[2,104],
[2,105],
[3,102],
[3,107]]
df = pd.DataFrame(data,columns = ['x', 'y'])
</code></pre>
<p>The data frame looks like this:</p>
<pre><code> x y
0 1 100
1 1 101
2 1 102
3 1 103
4 2 102
5 2 104
6 2 105
7 3 102
8 3 107
</code></pre>
<p>The output dataframe would look like this:</p>
<pre><code> x y inc
0 1 100 1
1 1 101 0
2 1 102 0
3 1 103 0
4 2 102 1
5 2 104 0
6 2 105 0
7 3 102 0
8 3 107 1
</code></pre>
<p>so row 0 would be included (inc), as 1 had not been duplicated yet in column x. Rows 1-3 would be excluded, as 1 in column x had already been accounted for. Row 4 would be included, as 2 in column x had not been included yet and column y (102) had not been included (it was excluded as a duplicate). At row 7, the first instance of 3 in column x would be excluded because 102 (in column y) had already been account for in row 4. Therefore, we would skip to row 8 and include it.</p>
<p>I have tried a variety of <code>.duplicated</code> approaches, but none of them have worked so far. If you only take the first instance of a value in column x, you would exclude rows that should be included (for example row 7).</p>
<p>Any help would be appreciated.</p> | 69,713,006 | 2021-10-25T18:02:25.890000 | 2 | 1 | 2 | 105 | python|pandas | <p>One way is to use a <code>set</code> and create custom function:</p>
<pre><code>seen = set()
def func(d):
res = d[~d.isin(seen)]
if len(res):
cur = res.iat[0]
seen.add(cur)
return cur
print (df.groupby("x")["y"].apply(func))
x
1 100
2 102
3 107
Name: y, dtype: int64
</code></pre> | 2021-10-25T18:24:46.800000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.drop_duplicates.html | pandas.DataFrame.drop_duplicates#
One way is to use a set and create custom function:
seen = set()
def func(d):
res = d[~d.isin(seen)]
if len(res):
cur = res.iat[0]
seen.add(cur)
return cur
print (df.groupby("x")["y"].apply(func))
x
1 100
2 102
3 107
Name: y, dtype: int64
pandas.DataFrame.drop_duplicates#
DataFrame.drop_duplicates(subset=None, *, keep='first', inplace=False, ignore_index=False)[source]#
Return DataFrame with duplicate rows removed.
Considering certain columns is optional. Indexes, including time indexes
are ignored.
Parameters
subsetcolumn label or sequence of labels, optionalOnly consider certain columns for identifying duplicates, by
default use all of the columns.
keep{‘first’, ‘last’, False}, default ‘first’Determines which duplicates (if any) to keep.
- first : Drop duplicates except for the first occurrence.
- last : Drop duplicates except for the last occurrence.
- False : Drop all duplicates.
inplacebool, default FalseWhether to modify the DataFrame rather than creating a new one.
ignore_indexbool, default FalseIf True, the resulting axis will be labeled 0, 1, …, n - 1.
New in version 1.0.0.
Returns
DataFrame or NoneDataFrame with duplicates removed or None if inplace=True.
See also
DataFrame.value_countsCount unique combinations of columns.
Examples
Consider dataset containing ramen rating.
>>> df = pd.DataFrame({
... 'brand': ['Yum Yum', 'Yum Yum', 'Indomie', 'Indomie', 'Indomie'],
... 'style': ['cup', 'cup', 'cup', 'pack', 'pack'],
... 'rating': [4, 4, 3.5, 15, 5]
... })
>>> df
brand style rating
0 Yum Yum cup 4.0
1 Yum Yum cup 4.0
2 Indomie cup 3.5
3 Indomie pack 15.0
4 Indomie pack 5.0
By default, it removes duplicate rows based on all columns.
>>> df.drop_duplicates()
brand style rating
0 Yum Yum cup 4.0
2 Indomie cup 3.5
3 Indomie pack 15.0
4 Indomie pack 5.0
To remove duplicates on specific column(s), use subset.
>>> df.drop_duplicates(subset=['brand'])
brand style rating
0 Yum Yum cup 4.0
2 Indomie cup 3.5
To remove duplicates and keep last occurrences, use keep.
>>> df.drop_duplicates(subset=['brand', 'style'], keep='last')
brand style rating
1 Yum Yum cup 4.0
2 Indomie cup 3.5
4 Indomie pack 5.0
| 35 | 318 | Remove duplicates that are in included in two columns in pandas
I have a dataframe that has two columns. I want to delete rows such that, for each row, it includes only one instance in the first column, but all unique values in column two are included.
Here is an example:
data = [[1,100],
[1,101],
[1,102],
[1,103],
[2,102],
[2,104],
[2,105],
[3,102],
[3,107]]
df = pd.DataFrame(data,columns = ['x', 'y'])
The data frame looks like this:
x y
0 1 100
1 1 101
2 1 102
3 1 103
4 2 102
5 2 104
6 2 105
7 3 102
8 3 107
The output dataframe would look like this:
x y inc
0 1 100 1
1 1 101 0
2 1 102 0
3 1 103 0
4 2 102 1
5 2 104 0
6 2 105 0
7 3 102 0
8 3 107 1
So row 0 would be included (inc = 1), as 1 had not yet appeared in column x. Rows 1-3 would be excluded, as 1 in column x had already been accounted for. Row 4 would be included, as 2 in column x had not been included yet and its column y value (102) had not yet been used (it had only been excluded as a duplicate). At row 7, the first instance of 3 in column x would be excluded because 102 (in column y) had already been accounted for in row 4. Therefore, we would skip to row 8 and include it.
I have tried a variety of .duplicated approaches, but none of them have worked so far. If you only take the first instance of a value in column x, you would exclude rows that should be included (for example row 7).
Any help would be appreciated. | One way is to use a set and create custom function:
seen = set()
def func(d):
res = d[~d.isin(seen)]
if len(res):
cur = res.iat[0]
seen.add(cur)
return cur
print (df.groupby("x")["y"].apply(func))
x
1 100
2 102
3 107
Name: y, dtype: int64
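To get the 0/1 inc column from the question rather than just the picked values, one option (a sketch building on the same seen-set function; the chosen row labels are recorded and then flagged) is:
import pandas as pd

data = [[1, 100], [1, 101], [1, 102], [1, 103],
        [2, 102], [2, 104], [2, 105],
        [3, 102], [3, 107]]
df = pd.DataFrame(data, columns=['x', 'y'])

seen = set()
chosen = []          # original row labels of the picked values

def func(d):
    res = d[~d.isin(seen)]
    if len(res):
        cur = res.iat[0]
        seen.add(cur)
        chosen.append(res.index[0])
        return cur

df.groupby('x')['y'].apply(func)
df['inc'] = df.index.isin(chosen).astype(int)
print(df)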
|
|
68,385,969 | Calculate Time Between Orders By Customer ID | <p>I have the following problem:</p>
<p>I want to calculate the time between orders for every Customer in Days.
My Dataframe looks like below.</p>
<pre><code> CustID OrderDate Sales
5 16838 2015-05-13 197.00
6 17986 2015-12-18 224.90
7 18191 2015-11-10 325.80
8 18191 2015-02-09 43.80
9 18191 2015-03-10 375.60
</code></pre>
<p>I found the following piece of code, but I can't get it to work.</p>
<pre><code>(data.groupby('CustID')
.OrderDate
.apply(lambda x: (x-x.min).days())
.reset_index())
</code></pre> | 68,387,807 | 2021-07-14T22:59:18.797000 | 1 | null | 0 | 117 | python|pandas | <p>You need to convert the date column to a datetime first and also put it in chronological order. This code should do the trick:</p>
<pre><code>data.OrderDate = pd.to_datetime(data.OrderDate)
data = data.sort_values(by=['OrderDate'])
data['days'] = data.groupby('CustID').OrderDate.apply(lambda x: x.diff())
</code></pre>
<p>Notice that this gives the days since the last order made by the customer. If the customer has not made a previous order then it will be returned as NaT.</p> | 2021-07-15T04:12:05.180000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.diff.html | pandas.DataFrame.diff#
pandas.DataFrame.diff#
DataFrame.diff(periods=1, axis=0)[source]#
First discrete difference of element.
Calculates the difference of a DataFrame element compared with another
element in the DataFrame (default is element in previous row).
You need to convert the date column to a datetime first and also put it in chronological order. This code should dot he trick:
data.OrderDate = pd.to_datetime(data.OrderDate)
data = data.sort_values(by=['OrderDate'])
data['days'] = data.groupby('CustID').OrderDate.apply(lambda x: x.diff())
Notice that this gives the days since the last order made by the customer. If the customer has not made a previous order then it will be returned as NaT.
Parameters
periodsint, default 1Periods to shift for calculating difference, accepts negative
values.
axis{0 or ‘index’, 1 or ‘columns’}, default 0Take difference over rows (0) or columns (1).
Returns
DataFrameFirst differences of the Series.
See also
DataFrame.pct_changePercent change over given number of periods.
DataFrame.shiftShift index by desired number of periods with an optional time freq.
Series.diffFirst discrete difference of object.
Notes
For boolean dtypes, this uses operator.xor() rather than
operator.sub().
The result is calculated according to current dtype in DataFrame,
however dtype of the result is always float64.
Examples
Difference with previous row
>>> df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6],
... 'b': [1, 1, 2, 3, 5, 8],
... 'c': [1, 4, 9, 16, 25, 36]})
>>> df
a b c
0 1 1 1
1 2 1 4
2 3 2 9
3 4 3 16
4 5 5 25
5 6 8 36
>>> df.diff()
a b c
0 NaN NaN NaN
1 1.0 0.0 3.0
2 1.0 1.0 5.0
3 1.0 1.0 7.0
4 1.0 2.0 9.0
5 1.0 3.0 11.0
Difference with previous column
>>> df.diff(axis=1)
a b c
0 NaN 0 0
1 NaN -1 3
2 NaN -1 7
3 NaN -1 13
4 NaN 0 20
5 NaN 2 28
Difference with 3rd previous row
>>> df.diff(periods=3)
a b c
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 3.0 2.0 15.0
4 3.0 4.0 21.0
5 3.0 6.0 27.0
Difference with following row
>>> df.diff(periods=-1)
a b c
0 -1.0 0.0 -3.0
1 -1.0 -1.0 -5.0
2 -1.0 -1.0 -7.0
3 -1.0 -2.0 -9.0
4 -1.0 -3.0 -11.0
5 NaN NaN NaN
Overflow in input dtype
>>> df = pd.DataFrame({'a': [1, 0]}, dtype=np.uint8)
>>> df.diff()
a
0 NaN
1 255.0
| 265 | 710 | Calculate Time Between Orders By Customer ID
I have the following problem:
I want to calculate the time between orders for every Customer in Days.
My Dataframe looks like below.
CustID OrderDate Sales
5 16838 2015-05-13 197.00
6 17986 2015-12-18 224.90
7 18191 2015-11-10 325.80
8 18191 2015-02-09 43.80
9 18191 2015-03-10 375.60
I found the following piece of code, but I can't get it to work.
(data.groupby('CustID')
.OrderDate
.apply(lambda x: (x-x.min).days())
.reset_index())
| You need to convert the date column to a datetime first and also put it in chronological order. This code should do the trick:
data.OrderDate = pd.to_datetime(data.OrderDate)
data = data.sort_values(by=['OrderDate'])
data['days'] = data.groupby('CustID').OrderDate.apply(lambda x: x.diff())
Notice that this gives the days since the last order made by the customer. If the customer has not made a previous order then it will be returned as NaT. |
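If integer day counts are wanted instead of Timedelta values (an assumption about the desired output), the same idea can be written as a small self-contained sketch using the question's sample data:
import pandas as pd

data = pd.DataFrame({'CustID': [16838, 17986, 18191, 18191, 18191],
                     'OrderDate': ['2015-05-13', '2015-12-18', '2015-11-10',
                                   '2015-02-09', '2015-03-10'],
                     'Sales': [197.0, 224.9, 325.8, 43.8, 375.6]})
data['OrderDate'] = pd.to_datetime(data['OrderDate'])
data = data.sort_values('OrderDate')
# .dt.days turns the Timedelta gaps into numbers (NaN where there is no prior order).
data['days'] = data.groupby('CustID')['OrderDate'].diff().dt.days
print(data)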
|
65,677,018 | How to generate a list of names associated with specific letter grade using regex in python pandas | <p>I'm starting with this code but it generates a list of only last names and letter grades in the format ['First Last: A']. What expression can I use to create a list of names associated with a letter grade A in the format ['First', 'Last'] with names extracted from only A letter grades? More specifically, I'd like to remove everything after the ': Grade' so that I only see the name. The data has other letter grades included. I think using \s and (?= ) could be helpful but I'm not sure where to place it.</p>
<pre><code> pattern = "(\w.+:\s[A])"
matches = re.findall(pattern,file)
</code></pre>
<p>The file is a simple text file in this format:</p>
<p>First Last: Grade</p>
<p>I'd like the output to extract only names with a grade A in this format:</p>
<p>First Last</p> | 65,677,441 | 2021-01-12T01:55:02.370000 | 2 | null | 0 | 409 | python|pandas | <p>There are many ways, depending on what your input data looks like:</p>
<pre><code>re.split(':', 'First Last: Grade')
# ['First Last', ' Grade']
re.findall('^(.*?):', 'First Last: Grade')
# ['First Last']
re.findall('^(\w+\s?\w*):', 'First Last: Grade')
# ['First Last']
</code></pre> | 2021-01-12T02:59:16.437000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.Series.str.count.html | pandas.Series.str.count#
pandas.Series.str.count#
Series.str.count(pat, flags=0)[source]#
Count occurrences of pattern in each string of the Series/Index.
This function is used to count the number of times a particular regex
Ther are so much ways, depending on what you input data:
re.split(':', 'First Last: Grade')
# ['First Last', ' Grade']
re.findall('^(.*?):', 'First Last: Grade')
# ['First Last']
re.findall('^(\w+\s?\w*):', 'First Last: Grade')
# ['First Last']
pattern is repeated in each of the string elements of the
Series.
Parameters
patstrValid regular expression.
flagsint, default 0, meaning no flagsFlags for the re module. For a complete list, see here.
**kwargsFor compatibility with other string methods. Not used.
Returns
Series or IndexSame type as the calling object containing the integer counts.
See also
reStandard library module for regular expressions.
str.countStandard library version, without regular expression support.
Notes
Some characters need to be escaped when passing in pat.
eg. '$' has a special meaning in regex and must be escaped when
finding this literal character.
Examples
>>> s = pd.Series(['A', 'B', 'Aaba', 'Baca', np.nan, 'CABA', 'cat'])
>>> s.str.count('a')
0 0.0
1 0.0
2 2.0
3 2.0
4 NaN
5 0.0
6 1.0
dtype: float64
Escape '$' to find the literal dollar sign.
>>> s = pd.Series(['$', 'B', 'Aab$', '$$ca', 'C$B$', 'cat'])
>>> s.str.count('\\$')
0 1
1 0
2 1
3 2
4 2
5 0
dtype: int64
This is also available on Index
>>> pd.Index(['A', 'A', 'Aaba', 'cat']).str.count('a')
Int64Index([0, 0, 2, 1], dtype='int64')
| 229 | 477 | How to generate a list of names associated with specific letter grade using regex in python pandas
I'm starting with this code but it generates a list of only last names and letter grades in the format ['First Last: A']. What expression can I use to create a list of names associated with a letter grade A in the format ['First', 'Last'] with names extracted from only A letter grades? More specifically, I'd like to remove everything after the ': Grade' so that I only see the name. The data has other letter grades included. I think using \s and (?= ) could be helpful but I'm not sure where to place it.
pattern = "(\w.+:\s[A])"
matches = re.findall(pattern,file)
The file is a simple text file in this format:
First Last: Grade
I'd like the output to extract only names with a grade A in this format:
First Last | Index ( [ ' A ', ' A ', ' Aaba ', ' cat ' ] ). str. count / | Ther are so much ways, depending on what you input data:
re.split(':', 'First Last: Grade')
# ['First Last', ' Grade']
re.findall('^(.*?):', 'First Last: Grade')
# ['First Last']
re.findall('^(\w+\s?\w*):', 'First Last: Grade')
# ['First Last']
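To answer the original question more directly (a sketch; the exact "First Last: Grade" file layout is assumed), the grade can be anchored in the pattern so that only A-grade names are returned:
import re

file = "Alice Smith: A\nBob Jones: B\nCara Lee: A"   # hypothetical file contents
names = re.findall(r'^(\w+\s\w+):\s*A\s*$', file, flags=re.MULTILINE)
print(names)  # ['Alice Smith', 'Cara Lee']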
|