This is a short introduction to Mars DataFrame, adapted from 10 minutes to pandas.
Customarily, we import as follows:
In [1]: import mars.tensor as mt

In [2]: import mars.dataframe as md
Creating a Series by passing a list of values, letting it create a default integer index:
In [3]: s = md.Series([1, 3, 5, mt.nan, 6, 8])

In [4]: s.execute()
Out[4]:
0    1.0
1    3.0
2    5.0
3    NaN
4    6.0
5    8.0
dtype: float64
Creating a DataFrame by passing a Mars tensor, with a datetime index and labeled columns:
In [5]: dates = md.date_range('20130101', periods=6)

In [6]: dates.execute()
Out[6]:
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
               '2013-01-05', '2013-01-06'],
              dtype='datetime64[ns]', freq='D')

In [7]: df = md.DataFrame(mt.random.randn(6, 4), index=dates, columns=list('ABCD'))

In [8]: df.execute()
Out[8]:
                   A         B         C         D
2013-01-01  0.039785 -0.188372  0.904168  1.838516
2013-01-02  0.393376 -1.220780 -1.070819  0.637269
2013-01-03  1.128161 -0.379367  0.273642 -0.398398
2013-01-04 -0.280203 -0.637178 -1.694790  1.355583
2013-01-05  0.802582 -0.069426 -1.038865 -0.113133
2013-01-06 -0.463181 -1.052488  1.263121 -0.315807
Creating a DataFrame by passing a dict of objects that can be converted to series-like.
In [9]: df2 = md.DataFrame({'A': 1.,
   ...:                     'B': md.Timestamp('20130102'),
   ...:                     'C': md.Series(1, index=list(range(4)), dtype='float32'),
   ...:                     'D': mt.array([3] * 4, dtype='int32'),
   ...:                     'E': 'foo'})
   ...:

In [10]: df2.execute()
Out[10]:
     A          B    C  D    E
0  1.0 2013-01-02  1.0  3  foo
1  1.0 2013-01-02  1.0  3  foo
2  1.0 2013-01-02  1.0  3  foo
3  1.0 2013-01-02  1.0  3  foo
The columns of the resulting DataFrame have different dtypes.
In [11]: df2.dtypes
Out[11]:
A           float64
B    datetime64[ns]
C           float32
D             int32
E            object
dtype: object
Here is how to view the top and bottom rows of the frame:
In [12]: df.head().execute()
Out[12]:
                   A         B         C         D
2013-01-01  0.039785 -0.188372  0.904168  1.838516
2013-01-02  0.393376 -1.220780 -1.070819  0.637269
2013-01-03  1.128161 -0.379367  0.273642 -0.398398
2013-01-04 -0.280203 -0.637178 -1.694790  1.355583
2013-01-05  0.802582 -0.069426 -1.038865 -0.113133

In [13]: df.tail(3).execute()
Out[13]:
                   A         B         C         D
2013-01-04 -0.280203 -0.637178 -1.694790  1.355583
2013-01-05  0.802582 -0.069426 -1.038865 -0.113133
2013-01-06 -0.463181 -1.052488  1.263121 -0.315807
Display the index and the columns:
In [14]: df.index.execute()
Out[14]:
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
               '2013-01-05', '2013-01-06'],
              dtype='datetime64[ns]', freq='D')

In [15]: df.columns.execute()
Out[15]: Index(['A', 'B', 'C', 'D'], dtype='object')
DataFrame.to_tensor() gives a Mars tensor representation of the underlying data. Note that this can be an expensive operation when your DataFrame has columns with different data types, which comes down to a fundamental difference between DataFrame and tensor: tensors have one dtype for the entire tensor, while DataFrames have one dtype per column. When you call DataFrame.to_tensor(), Mars DataFrame will find the tensor dtype that can hold all of the dtypes in the DataFrame. This may end up being object, which requires casting every value to a Python object.
For df, our DataFrame of all floating-point values, DataFrame.to_tensor() is fast and doesn’t require copying data.
In [16]: df.to_tensor().execute()
Out[16]:
array([[ 0.03978516, -0.18837215,  0.9041681 ,  1.83851562],
       [ 0.3933763 , -1.22078023, -1.07081854,  0.63726925],
       [ 1.12816132, -0.37936728,  0.27364242, -0.39839815],
       [-0.2802029 , -0.63717819, -1.69478977,  1.35558251],
       [ 0.80258183, -0.06942621, -1.03886539, -0.11313261],
       [-0.46318118, -1.05248845,  1.26312122, -0.31580712]])
For df2, the DataFrame with multiple dtypes, DataFrame.to_tensor() is relatively expensive.
In [17]: df2.to_tensor().execute()
Out[17]:
array([[1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'foo'],
       [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'foo'],
       [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'foo'],
       [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'foo']], dtype=object)
Note: DataFrame.to_tensor() does not include the index or column labels in the output.
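The dtype-finding rule described above can be illustrated with plain pandas/NumPy, whose promotion semantics Mars DataFrame follows; this is a hedged sketch with made-up column names, not Mars-specific API:

```python
import numpy as np
import pandas as pd

# A numeric-only DataFrame mixing int64 and float64 columns: the common
# dtype that can hold both is float64, so no fallback to object is needed.
df_num = pd.DataFrame({'i': np.arange(3),               # int64
                       'f': np.linspace(0.0, 1.0, 3)})  # float64
arr = df_num.to_numpy()
# arr.dtype is float64; only mixing in strings or timestamps, as in the
# df2 example above, forces the expensive object dtype.
```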
describe() shows a quick statistic summary of your data:
In [18]: df.describe().execute()
Out[18]:
              A         B         C         D
count  6.000000  6.000000  6.000000  6.000000
mean   0.270087 -0.591269 -0.227257  0.500672
std    0.621061  0.467047  1.206334  0.937132
min   -0.463181 -1.220780 -1.694790 -0.398398
25%   -0.200206 -0.948661 -1.062830 -0.265138
50%    0.216581 -0.508273 -0.382611  0.262068
75%    0.700280 -0.236121  0.746537  1.176004
max    1.128161 -0.069426  1.263121  1.838516
Sorting by an axis:
In [19]: df.sort_index(axis=1, ascending=False).execute()
Out[19]:
                   D         C         B         A
2013-01-01  1.838516  0.904168 -0.188372  0.039785
2013-01-02  0.637269 -1.070819 -1.220780  0.393376
2013-01-03 -0.398398  0.273642 -0.379367  1.128161
2013-01-04  1.355583 -1.694790 -0.637178 -0.280203
2013-01-05 -0.113133 -1.038865 -0.069426  0.802582
2013-01-06 -0.315807  1.263121 -1.052488 -0.463181
Sorting by values:
In [20]: df.sort_values(by='B').execute()
Out[20]:
                   A         B         C         D
2013-01-02  0.393376 -1.220780 -1.070819  0.637269
2013-01-06 -0.463181 -1.052488  1.263121 -0.315807
2013-01-04 -0.280203 -0.637178 -1.694790  1.355583
2013-01-03  1.128161 -0.379367  0.273642 -0.398398
2013-01-01  0.039785 -0.188372  0.904168  1.838516
2013-01-05  0.802582 -0.069426 -1.038865 -0.113133
While standard Python/NumPy expressions for selecting and setting are intuitive and come in handy for interactive work, for production code we recommend the optimized DataFrame data access methods: .at, .iat, .loc and .iloc.
Selecting a single column, which yields a Series, equivalent to df.A:
In [21]: df['A'].execute()
Out[21]:
2013-01-01    0.039785
2013-01-02    0.393376
2013-01-03    1.128161
2013-01-04   -0.280203
2013-01-05    0.802582
2013-01-06   -0.463181
Freq: D, Name: A, dtype: float64
Selecting via [], which slices the rows.
In [22]: df[0:3].execute()
Out[22]:
                   A         B         C         D
2013-01-01  0.039785 -0.188372  0.904168  1.838516
2013-01-02  0.393376 -1.220780 -1.070819  0.637269
2013-01-03  1.128161 -0.379367  0.273642 -0.398398

In [23]: df['20130102':'20130104'].execute()
Out[23]:
                   A         B         C         D
2013-01-02  0.393376 -1.220780 -1.070819  0.637269
2013-01-03  1.128161 -0.379367  0.273642 -0.398398
2013-01-04 -0.280203 -0.637178 -1.694790  1.355583
For getting a cross section using a label:
In [24]: df.loc['20130101'].execute()
Out[24]:
A    0.039785
B   -0.188372
C    0.904168
D    1.838516
Name: 2013-01-01 00:00:00, dtype: float64
Selecting on a multi-axis by label:
In [25]: df.loc[:, ['A', 'B']].execute()
Out[25]:
                   A         B
2013-01-01  0.039785 -0.188372
2013-01-02  0.393376 -1.220780
2013-01-03  1.128161 -0.379367
2013-01-04 -0.280203 -0.637178
2013-01-05  0.802582 -0.069426
2013-01-06 -0.463181 -1.052488
Showing label slicing, both endpoints are included:
In [26]: df.loc['20130102':'20130104', ['A', 'B']].execute()
Out[26]:
                   A         B
2013-01-02  0.393376 -1.220780
2013-01-03  1.128161 -0.379367
2013-01-04 -0.280203 -0.637178
Reduction in the dimensions of the returned object:
In [27]: df.loc['20130102', ['A', 'B']].execute()
Out[27]:
A    0.393376
B   -1.220780
Name: 2013-01-02 00:00:00, dtype: float64
For getting a scalar value:
In [28]: df.loc['20130101', 'A'].execute()
Out[28]: 0.03978515801199635
For getting fast access to a scalar (equivalent to the prior method):
In [29]: df.at['20130101', 'A'].execute()
Out[29]: 0.03978515801199635
Select via the position of the passed integers:
In [30]: df.iloc[3].execute()
Out[30]:
A   -0.280203
B   -0.637178
C   -1.694790
D    1.355583
Name: 2013-01-04 00:00:00, dtype: float64
By integer slices, acting similar to numpy/python:
In [31]: df.iloc[3:5, 0:2].execute()
Out[31]:
                   A         B
2013-01-04 -0.280203 -0.637178
2013-01-05  0.802582 -0.069426
By lists of integer position locations, similar to the numpy/python style:
In [32]: df.iloc[[1, 2, 4], [0, 2]].execute()
Out[32]:
                   A         C
2013-01-02  0.393376 -1.070819
2013-01-03  1.128161  0.273642
2013-01-05  0.802582 -1.038865
For slicing rows explicitly:
In [33]: df.iloc[1:3, :].execute()
Out[33]:
                   A         B         C         D
2013-01-02  0.393376 -1.220780 -1.070819  0.637269
2013-01-03  1.128161 -0.379367  0.273642 -0.398398
For slicing columns explicitly:
In [34]: df.iloc[:, 1:3].execute()
Out[34]:
                   B         C
2013-01-01 -0.188372  0.904168
2013-01-02 -1.220780 -1.070819
2013-01-03 -0.379367  0.273642
2013-01-04 -0.637178 -1.694790
2013-01-05 -0.069426 -1.038865
2013-01-06 -1.052488  1.263121
For getting a value explicitly:
In [35]: df.iloc[1, 1].execute()
Out[35]: -1.220780230100412

In [36]: df.iat[1, 1].execute()
Out[36]: -1.220780230100412
Using a single column’s values to select data.
In [37]: df[df['A'] > 0].execute()
Out[37]:
                   A         B         C         D
2013-01-01  0.039785 -0.188372  0.904168  1.838516
2013-01-02  0.393376 -1.220780 -1.070819  0.637269
2013-01-03  1.128161 -0.379367  0.273642 -0.398398
2013-01-05  0.802582 -0.069426 -1.038865 -0.113133
Selecting values from a DataFrame where a boolean condition is met.
In [38]: df[df > 0].execute()
Out[38]:
                   A   B         C         D
2013-01-01  0.039785 NaN  0.904168  1.838516
2013-01-02  0.393376 NaN       NaN  0.637269
2013-01-03  1.128161 NaN  0.273642       NaN
2013-01-04       NaN NaN       NaN  1.355583
2013-01-05  0.802582 NaN       NaN       NaN
2013-01-06       NaN NaN  1.263121       NaN
Operations in general exclude missing data.
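For instance, statistics skip missing values by default. Here is a plain-pandas sketch of that behaviour (Mars mirrors pandas semantics here; the series below is made up for illustration):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])

# The NaN is excluded by default: (1 + 3) / 2 = 2.0
print(s.mean())              # 2.0

# Disabling the exclusion propagates the missing value instead.
print(s.mean(skipna=False))  # nan
```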
Performing a descriptive statistic:
In [39]: df.mean().execute()
Out[39]:
A    0.270087
B   -0.591269
C   -0.227257
D    0.500672
dtype: float64
Same operation on the other axis:
In [40]: df.mean(1).execute()
Out[40]:
2013-01-01    0.648524
2013-01-02   -0.315238
2013-01-03    0.156010
2013-01-04   -0.314147
2013-01-05   -0.104711
2013-01-06   -0.142089
Freq: D, dtype: float64
Operating with objects that have different dimensionality and need alignment. In addition, Mars DataFrame automatically broadcasts along the specified dimension.
In [41]: s = md.Series([1, 3, 5, mt.nan, 6, 8], index=dates).shift(2)

In [42]: s.execute()
Out[42]:
2013-01-01    NaN
2013-01-02    NaN
2013-01-03    1.0
2013-01-04    3.0
2013-01-05    5.0
2013-01-06    NaN
Freq: D, dtype: float64

In [43]: df.sub(s, axis='index').execute()
Out[43]:
                   A         B         C         D
2013-01-01       NaN       NaN       NaN       NaN
2013-01-02       NaN       NaN       NaN       NaN
2013-01-03  0.128161 -1.379367 -0.726358 -1.398398
2013-01-04 -3.280203 -3.637178 -4.694790 -1.644417
2013-01-05 -4.197418 -5.069426 -6.038865 -5.113133
2013-01-06       NaN       NaN       NaN       NaN
Applying functions to the data:
In [44]: df.apply(lambda x: x.max() - x.min()).execute()
Out[44]:
A    1.591342
B    1.151354
C    2.957911
D    2.236914
dtype: float64
Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions by default (and in some cases always uses them). See more at Vectorized String Methods.
In [45]: s = md.Series(['A', 'B', 'C', 'Aaba', 'Baca', mt.nan, 'CABA', 'dog', 'cat'])

In [46]: s.str.lower().execute()
Out[46]:
0       a
1       b
2       c
3    aaba
4    baca
5     NaN
6    caba
7     dog
8     cat
dtype: object
Mars DataFrame provides various facilities for easily combining together Series and DataFrame objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.
Concatenating DataFrame objects together with concat():
In [47]: df = md.DataFrame(mt.random.randn(10, 4))

In [48]: df.execute()
Out[48]:
          0         1         2         3
0  2.147445 -0.067953 -1.679181 -0.833653
1  1.980446 -1.776451 -1.585992 -0.490787
2  0.001157  2.535161  1.517359  0.358273
3  0.219983 -0.051507 -1.095765 -0.772121
4  0.621682 -1.272987  0.388516 -0.955901
5 -0.491198 -0.363230 -0.381387 -0.201356
6 -0.689568  0.166605  1.171567  0.066894
7 -0.486833 -1.077018 -0.282597  2.046952
8  0.090990 -2.081635 -2.079520  0.286736
9  1.580950 -1.092175  0.067889 -1.280085

# break it into pieces
In [49]: pieces = [df[:3], df[3:7], df[7:]]

In [50]: md.concat(pieces).execute()
Out[50]:
          0         1         2         3
0  2.147445 -0.067953 -1.679181 -0.833653
1  1.980446 -1.776451 -1.585992 -0.490787
2  0.001157  2.535161  1.517359  0.358273
3  0.219983 -0.051507 -1.095765 -0.772121
4  0.621682 -1.272987  0.388516 -0.955901
5 -0.491198 -0.363230 -0.381387 -0.201356
6 -0.689568  0.166605  1.171567  0.066894
7 -0.486833 -1.077018 -0.282597  2.046952
8  0.090990 -2.081635 -2.079520  0.286736
9  1.580950 -1.092175  0.067889 -1.280085
Adding a column to a DataFrame is relatively fast. However, adding a row requires a copy, and may be expensive. We recommend passing a pre-built list of records to the DataFrame constructor instead of building a DataFrame by iteratively appending records to it.
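A minimal sketch of the recommended pattern, written in plain pandas for illustration since Mars mirrors this API; the column names and sizes here are made up:

```python
import pandas as pd

# Preferred: collect the rows as plain Python records, then construct once.
records = [{'key': i, 'value': i * i} for i in range(4)]
df = pd.DataFrame(records)

# Avoid: growing a frame one row at a time, which copies on every step.
# slow = pd.DataFrame(columns=['key', 'value'])
# for i in range(4):
#     slow = pd.concat([slow, pd.DataFrame([{'key': i, 'value': i * i}])])
```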
SQL style merges. See the Database style joining section.
In [51]: left = md.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})

In [52]: right = md.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})

In [53]: left.execute()
Out[53]:
   key  lval
0  foo     1
1  foo     2

In [54]: right.execute()
Out[54]:
   key  rval
0  foo     4
1  foo     5

In [55]: md.merge(left, right, on='key').execute()
Out[55]:
   key  lval  rval
0  foo     1     4
1  foo     1     5
2  foo     2     4
3  foo     2     5
Another example:
In [56]: left = md.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})

In [57]: right = md.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]})

In [58]: left.execute()
Out[58]:
   key  lval
0  foo     1
1  bar     2

In [59]: right.execute()
Out[59]:
   key  rval
0  foo     4
1  bar     5

In [60]: md.merge(left, right, on='key').execute()
Out[60]:
   key  lval  rval
0  foo     1     4
1  bar     2     5
By “group by” we are referring to a process involving one or more of the following steps:
Splitting the data into groups based on some criteria
Applying a function to each group independently
Combining the results into a data structure
In [61]: df = md.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
   ....:                          'foo', 'bar', 'foo', 'foo'],
   ....:                    'B': ['one', 'one', 'two', 'three',
   ....:                          'two', 'two', 'one', 'three'],
   ....:                    'C': mt.random.randn(8),
   ....:                    'D': mt.random.randn(8)})
   ....:

In [62]: df.execute()
Out[62]:
     A      B         C         D
0  foo    one -0.934587 -1.395785
1  bar    one -0.931728 -1.355472
2  foo    two -0.615953 -0.502464
3  bar  three -0.799975 -0.118434
4  foo    two -1.313550 -0.277331
5  bar    two -1.163293  1.032541
6  foo    one -0.428732 -0.961337
7  foo  three  0.322676 -0.547266
Grouping and then applying the sum() function to the resulting groups.
In [63]: df.groupby('A').sum().execute()
Out[63]:
            C         D
A
bar -2.894996 -0.441365
foo -2.970146 -3.684182
Grouping by multiple columns forms a hierarchical index, and again we can apply the sum function.
In [64]: df.groupby(['A', 'B']).sum().execute()
Out[64]:
                  C         D
A   B
foo one   -1.363318 -2.357121
    two   -1.929504 -0.779795
    three  0.322676 -0.547266
bar one   -0.931728 -1.355472
    two   -1.163293  1.032541
    three -0.799975 -0.118434
We use the standard convention for referencing the matplotlib API:
In [65]: import matplotlib.pyplot as plt

In [66]: plt.close('all')
In [67]: ts = md.Series(mt.random.randn(1000),
   ....:                index=md.date_range('1/1/2000', periods=1000))
   ....:

In [68]: ts = ts.cumsum()

In [69]: ts.plot()
Out[69]: <AxesSubplot:>
On a DataFrame, the plot() method is a convenience to plot all of the columns with labels:
In [70]: df = md.DataFrame(mt.random.randn(1000, 4), index=ts.index,
   ....:                   columns=['A', 'B', 'C', 'D'])
   ....:

In [71]: df = df.cumsum()

In [72]: plt.figure()
Out[72]: <Figure size 640x480 with 0 Axes>

In [73]: df.plot()
Out[73]: <AxesSubplot:>

In [74]: plt.legend(loc='best')
Out[74]: <matplotlib.legend.Legend at 0x7f1167d90a90>
Writing to a csv file.

In [75]: df.to_csv('foo.csv').execute()
Out[75]:
Empty DataFrame
Columns: []
Index: []
Reading from a csv file.
In [76]: md.read_csv('foo.csv').execute()
Out[76]:
    Unnamed: 0          A          B          C          D
0   2000-01-01  -2.014476  -2.824053  -1.321149  -0.684440
1   2000-01-02  -1.555855  -4.000812  -1.349740  -1.011120
2   2000-01-03  -1.195056  -4.587659  -1.292865  -2.405421
3   2000-01-04   0.172848  -4.773932  -0.357737  -3.259702
4   2000-01-05   0.964374  -5.163434  -0.267826  -3.208199
..         ...        ...        ...        ...        ...
995 2002-09-22  17.183405   8.268086 -17.790193 -24.465205
996 2002-09-23  16.256185   6.922495 -17.098417 -24.966699
997 2002-09-24  17.065281   9.225703 -17.167156 -25.026606
998 2002-09-25  18.037321   9.468116 -16.718285 -22.451462
999 2002-09-26  19.692603  10.528481 -16.154121 -21.879568

[1000 rows x 5 columns]