10 minutes to Mars DataFrame#

This is a short introduction to Mars DataFrame, adapted from 10 minutes to pandas.

Customarily, we import as follows:

In [1]: import mars

In [2]: import mars.tensor as mt

In [3]: import mars.dataframe as md

Now create a new default session.

In [4]: mars.new_session()
Out[4]: <mars.deploy.oscar.session.SyncSession at 0x7f520fa9bf50>

Object creation#

Creating a Series by passing a list of values, letting Mars DataFrame create a default integer index:

In [5]: s = md.Series([1, 3, 5, mt.nan, 6, 8])

In [6]: s.execute()
Out[6]: 
0    1.0
1    3.0
2    5.0
3    NaN
4    6.0
5    8.0
dtype: float64

Creating a DataFrame by passing a Mars tensor, with a datetime index and labeled columns:

In [7]: dates = md.date_range('20130101', periods=6)

In [8]: dates.execute()
Out[8]: 
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
               '2013-01-05', '2013-01-06'],
              dtype='datetime64[ns]', freq='D')

In [9]: df = md.DataFrame(mt.random.randn(6, 4), index=dates, columns=list('ABCD'))

In [10]: df.execute()
Out[10]: 
                   A         B         C         D
2013-01-01  0.131202  0.806436 -1.017856 -1.098031
2013-01-02  1.353260 -0.751281 -0.676277 -0.097598
2013-01-03 -1.340585 -0.823038  1.587496 -1.234449
2013-01-04  0.265652 -0.002158  2.200595  2.004409
2013-01-05  0.169342 -0.251483  0.153520 -0.362017
2013-01-06  0.197870  0.107158 -0.139387  0.652527

Creating a DataFrame by passing a dict of objects that can be converted to a series-like structure:

In [11]: df2 = md.DataFrame({'A': 1.,
   ....:                     'B': md.Timestamp('20130102'),
   ....:                     'C': md.Series(1, index=list(range(4)), dtype='float32'),
   ....:                     'D': mt.array([3] * 4, dtype='int32'),
   ....:                     'E': 'foo'})
   ....: 

In [12]: df2.execute()
Out[12]: 
     A          B    C  D    E
0  1.0 2013-01-02  1.0  3  foo
1  1.0 2013-01-02  1.0  3  foo
2  1.0 2013-01-02  1.0  3  foo
3  1.0 2013-01-02  1.0  3  foo

The columns of the resulting DataFrame have different dtypes.

In [13]: df2.dtypes
Out[13]: 
A           float64
B    datetime64[ns]
C           float32
D             int32
E            object
dtype: object

Viewing data#

Here is how to view the top and bottom rows of the frame:

In [14]: df.head().execute()
Out[14]: 
                   A         B         C         D
2013-01-01  0.131202  0.806436 -1.017856 -1.098031
2013-01-02  1.353260 -0.751281 -0.676277 -0.097598
2013-01-03 -1.340585 -0.823038  1.587496 -1.234449
2013-01-04  0.265652 -0.002158  2.200595  2.004409
2013-01-05  0.169342 -0.251483  0.153520 -0.362017

In [15]: df.tail(3).execute()
Out[15]: 
                   A         B         C         D
2013-01-04  0.265652 -0.002158  2.200595  2.004409
2013-01-05  0.169342 -0.251483  0.153520 -0.362017
2013-01-06  0.197870  0.107158 -0.139387  0.652527

Display the index and the columns:

In [16]: df.index.execute()
Out[16]: 
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
               '2013-01-05', '2013-01-06'],
              dtype='datetime64[ns]', freq='D')

In [17]: df.columns.execute()
Out[17]: Index(['A', 'B', 'C', 'D'], dtype='object')

DataFrame.to_tensor() gives a Mars tensor representation of the underlying data. Note that this can be an expensive operation when your DataFrame has columns with different data types, which comes down to a fundamental difference between DataFrame and tensor: tensors have one dtype for the entire tensor, while DataFrames have one dtype per column. When you call DataFrame.to_tensor(), Mars DataFrame will find the tensor dtype that can hold all of the dtypes in the DataFrame. This may end up being object, which requires casting every value to a Python object.

For df, our DataFrame of all floating-point values, DataFrame.to_tensor() is fast and doesn’t require copying data.

In [18]: df.to_tensor().execute()
Out[18]: 
array([[ 1.31201729e-01,  8.06435653e-01, -1.01785596e+00,
        -1.09803098e+00],
       [ 1.35326041e+00, -7.51280648e-01, -6.76276948e-01,
        -9.75979588e-02],
       [-1.34058477e+00, -8.23037990e-01,  1.58749612e+00,
        -1.23444884e+00],
       [ 2.65652215e-01, -2.15804872e-03,  2.20059480e+00,
         2.00440905e+00],
       [ 1.69341795e-01, -2.51482962e-01,  1.53520168e-01,
        -3.62017238e-01],
       [ 1.97869804e-01,  1.07158085e-01, -1.39387347e-01,
         6.52526637e-01]])

For df2, the DataFrame with multiple dtypes, DataFrame.to_tensor() is relatively expensive.

In [19]: df2.to_tensor().execute()
Out[19]: 
array([[1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'foo'],
       [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'foo'],
       [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'foo'],
       [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'foo']],
      dtype=object)

Note

DataFrame.to_tensor() does not include the index or column labels in the output.

describe() shows a quick statistic summary of your data:

In [20]: df.describe().execute()
Out[20]: 
              A         B         C         D
count  6.000000  6.000000  6.000000  6.000000
mean   0.129457 -0.152394  0.351348 -0.022527
std    0.858317  0.604573  1.277377  1.209175
min   -1.340585 -0.823038 -1.017856 -1.234449
25%    0.140737 -0.626331 -0.542055 -0.914028
50%    0.183606 -0.126821  0.007066 -0.229808
75%    0.248707  0.079829  1.229002  0.464995
max    1.353260  0.806436  2.200595  2.004409

Sorting by an axis:

In [21]: df.sort_index(axis=1, ascending=False).execute()
Out[21]: 
                   D         C         B         A
2013-01-01 -1.098031 -1.017856  0.806436  0.131202
2013-01-02 -0.097598 -0.676277 -0.751281  1.353260
2013-01-03 -1.234449  1.587496 -0.823038 -1.340585
2013-01-04  2.004409  2.200595 -0.002158  0.265652
2013-01-05 -0.362017  0.153520 -0.251483  0.169342
2013-01-06  0.652527 -0.139387  0.107158  0.197870

Sorting by values:

In [22]: df.sort_values(by='B').execute()
Out[22]: 
                   A         B         C         D
2013-01-03 -1.340585 -0.823038  1.587496 -1.234449
2013-01-02  1.353260 -0.751281 -0.676277 -0.097598
2013-01-05  0.169342 -0.251483  0.153520 -0.362017
2013-01-04  0.265652 -0.002158  2.200595  2.004409
2013-01-06  0.197870  0.107158 -0.139387  0.652527
2013-01-01  0.131202  0.806436 -1.017856 -1.098031

Selection#

Note

While standard Python / NumPy expressions for selecting and setting are intuitive and come in handy for interactive work, for production code we recommend the optimized DataFrame data access methods, .at, .iat, .loc and .iloc.

Getting#

Selecting a single column, which yields a Series, equivalent to df.A:

In [23]: df['A'].execute()
Out[23]: 
2013-01-01    0.131202
2013-01-02    1.353260
2013-01-03   -1.340585
2013-01-04    0.265652
2013-01-05    0.169342
2013-01-06    0.197870
Freq: D, Name: A, dtype: float64

Selecting via [], which slices the rows.

In [24]: df[0:3].execute()
Out[24]: 
                   A         B         C         D
2013-01-01  0.131202  0.806436 -1.017856 -1.098031
2013-01-02  1.353260 -0.751281 -0.676277 -0.097598
2013-01-03 -1.340585 -0.823038  1.587496 -1.234449

In [25]: df['20130102':'20130104'].execute()
Out[25]: 
                   A         B         C         D
2013-01-02  1.353260 -0.751281 -0.676277 -0.097598
2013-01-03 -1.340585 -0.823038  1.587496 -1.234449
2013-01-04  0.265652 -0.002158  2.200595  2.004409

Selection by label#

For getting a cross section using a label:

In [26]: df.loc['20130101'].execute()
Out[26]: 
A    0.131202
B    0.806436
C   -1.017856
D   -1.098031
Name: 2013-01-01 00:00:00, dtype: float64

Selecting on a multi-axis by label:

In [27]: df.loc[:, ['A', 'B']].execute()
Out[27]: 
                   A         B
2013-01-01  0.131202  0.806436
2013-01-02  1.353260 -0.751281
2013-01-03 -1.340585 -0.823038
2013-01-04  0.265652 -0.002158
2013-01-05  0.169342 -0.251483
2013-01-06  0.197870  0.107158

Showing label slicing, both endpoints are included:

In [28]: df.loc['20130102':'20130104', ['A', 'B']].execute()
Out[28]: 
                   A         B
2013-01-02  1.353260 -0.751281
2013-01-03 -1.340585 -0.823038
2013-01-04  0.265652 -0.002158

Reduction in the dimensions of the returned object:

In [29]: df.loc['20130102', ['A', 'B']].execute()
Out[29]: 
A    1.353260
B   -0.751281
Name: 2013-01-02 00:00:00, dtype: float64

For getting a scalar value:

In [30]: df.loc['20130101', 'A'].execute()
Out[30]: 0.13120172881188916

For getting fast access to a scalar (equivalent to the prior method):

In [31]: df.at['20130101', 'A'].execute()
Out[31]: 0.13120172881188916

Selection by position#

Select via the position of the passed integers:

In [32]: df.iloc[3].execute()
Out[32]: 
A    0.265652
B   -0.002158
C    2.200595
D    2.004409
Name: 2013-01-04 00:00:00, dtype: float64

By integer slices, acting similar to NumPy/Python:

In [33]: df.iloc[3:5, 0:2].execute()
Out[33]: 
                   A         B
2013-01-04  0.265652 -0.002158
2013-01-05  0.169342 -0.251483

By lists of integer position locations, similar to the NumPy/Python style:

In [34]: df.iloc[[1, 2, 4], [0, 2]].execute()
Out[34]: 
                   A         C
2013-01-02  1.353260 -0.676277
2013-01-03 -1.340585  1.587496
2013-01-05  0.169342  0.153520

For slicing rows explicitly:

In [35]: df.iloc[1:3, :].execute()
Out[35]: 
                   A         B         C         D
2013-01-02  1.353260 -0.751281 -0.676277 -0.097598
2013-01-03 -1.340585 -0.823038  1.587496 -1.234449

For slicing columns explicitly:

In [36]: df.iloc[:, 1:3].execute()
Out[36]: 
                   B         C
2013-01-01  0.806436 -1.017856
2013-01-02 -0.751281 -0.676277
2013-01-03 -0.823038  1.587496
2013-01-04 -0.002158  2.200595
2013-01-05 -0.251483  0.153520
2013-01-06  0.107158 -0.139387

For getting a value explicitly:

In [37]: df.iloc[1, 1].execute()
Out[37]: -0.7512806482972055

For getting fast access to a scalar (equivalent to the prior method):

In [38]: df.iat[1, 1].execute()
Out[38]: -0.7512806482972055

Boolean indexing#

Using a single column’s values to select data.

In [39]: df[df['A'] > 0].execute()
Out[39]: 
                   A         B         C         D
2013-01-01  0.131202  0.806436 -1.017856 -1.098031
2013-01-02  1.353260 -0.751281 -0.676277 -0.097598
2013-01-04  0.265652 -0.002158  2.200595  2.004409
2013-01-05  0.169342 -0.251483  0.153520 -0.362017
2013-01-06  0.197870  0.107158 -0.139387  0.652527

Selecting values from a DataFrame where a boolean condition is met.

In [40]: df[df > 0].execute()
Out[40]: 
                   A         B         C         D
2013-01-01  0.131202  0.806436       NaN       NaN
2013-01-02  1.353260       NaN       NaN       NaN
2013-01-03       NaN       NaN  1.587496       NaN
2013-01-04  0.265652       NaN  2.200595  2.004409
2013-01-05  0.169342       NaN  0.153520       NaN
2013-01-06  0.197870  0.107158       NaN  0.652527

Operations#

Stats#

Operations in general exclude missing data.
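
For example, a Series containing NaN still produces a finite mean, because the missing value is skipped. The snippet below is a minimal sketch (the variable name s_na is illustrative and not part of the session above):

# NaN values are ignored by reductions such as mean()
s_na = md.Series([1.0, mt.nan, 3.0])
s_na.mean().execute()  # 2.0 -- the NaN is excluded from the computation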

Performing a descriptive statistic:

In [41]: df.mean().execute()
Out[41]: 
A    0.129457
B   -0.152394
C    0.351348
D   -0.022527
dtype: float64

Same operation on the other axis:

In [42]: df.mean(1).execute()
Out[42]: 
2013-01-01   -0.294562
2013-01-02   -0.042974
2013-01-03   -0.452644
2013-01-04    1.117125
2013-01-05   -0.072660
2013-01-06    0.204542
Freq: D, dtype: float64

Operating with objects that have different dimensionality and need alignment. In addition, Mars DataFrame automatically broadcasts along the specified dimension.

In [43]: s = md.Series([1, 3, 5, mt.nan, 6, 8], index=dates).shift(2)

In [44]: s.execute()
Out[44]: 
2013-01-01    NaN
2013-01-02    NaN
2013-01-03    1.0
2013-01-04    3.0
2013-01-05    5.0
2013-01-06    NaN
Freq: D, dtype: float64

In [45]: df.sub(s, axis='index').execute()
Out[45]: 
                   A         B         C         D
2013-01-01       NaN       NaN       NaN       NaN
2013-01-02       NaN       NaN       NaN       NaN
2013-01-03 -2.340585 -1.823038  0.587496 -2.234449
2013-01-04 -2.734348 -3.002158 -0.799405 -0.995591
2013-01-05 -4.830658 -5.251483 -4.846480 -5.362017
2013-01-06       NaN       NaN       NaN       NaN

Apply#

Applying functions to the data:

In [46]: df.apply(lambda x: x.max() - x.min()).execute()
Out[46]: 
A    2.693845
B    1.629474
C    3.218451
D    3.238858
dtype: float64

String Methods#

Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions by default (and in some cases always uses them). See more at Vectorized String Methods.

In [47]: s = md.Series(['A', 'B', 'C', 'Aaba', 'Baca', mt.nan, 'CABA', 'dog', 'cat'])

In [48]: s.str.lower().execute()
Out[48]: 
0       a
1       b
2       c
3    aaba
4    baca
5     NaN
6    caba
7     dog
8     cat
dtype: object
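
Since pattern-matching in str uses regular expressions, methods such as str.contains accept a regex pattern. The following is a hedged sketch that reuses the Series s from above and assumes str.contains forwards the same keywords as its pandas counterpart:

# regex match against each element; missing values stay NaN in the result
s.str.contains('^[ABab]', regex=True).execute()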

Merge#

Concat#

Mars DataFrame provides various facilities for easily combining Series and DataFrame objects, with several kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.

Concatenating DataFrame objects together with concat():

In [49]: df = md.DataFrame(mt.random.randn(10, 4))

In [50]: df.execute()
Out[50]: 
          0         1         2         3
0  0.340665 -0.017870  2.639970 -0.661813
1  1.658088  1.856396 -1.750673 -0.167159
2  1.110126  0.196777 -0.352805 -0.196033
3  0.084739  0.345062  0.198985 -0.202668
4  0.054834 -0.603046  1.132117 -0.184715
5  1.224022  1.182662  0.047055 -0.422539
6 -0.625837  0.841115 -0.295166 -0.800794
7 -1.819517  1.898076  0.933760 -0.396641
8 -0.982989 -1.757011 -0.515680 -0.810376
9 -0.917127 -0.092858 -0.501922 -0.471996

# break it into pieces
In [51]: pieces = [df[:3], df[3:7], df[7:]]

In [52]: md.concat(pieces).execute()
Out[52]: 
          0         1         2         3
0  0.340665 -0.017870  2.639970 -0.661813
1  1.658088  1.856396 -1.750673 -0.167159
2  1.110126  0.196777 -0.352805 -0.196033
3  0.084739  0.345062  0.198985 -0.202668
4  0.054834 -0.603046  1.132117 -0.184715
5  1.224022  1.182662  0.047055 -0.422539
6 -0.625837  0.841115 -0.295166 -0.800794
7 -1.819517  1.898076  0.933760 -0.396641
8 -0.982989 -1.757011 -0.515680 -0.810376
9 -0.917127 -0.092858 -0.501922 -0.471996

Note

Adding a column to a DataFrame is relatively fast. However, adding a row requires a copy, and may be expensive. We recommend passing a pre-built list of records to the DataFrame constructor instead of building a DataFrame by iteratively appending records to it.
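
As a rough illustration of that advice, the sketch below builds a frame from a pre-built list of records in one constructor call instead of growing it row by row. The record data and variable names are made up for illustration, and it assumes the DataFrame constructor accepts a list of dicts, as the pandas constructor does:

# preferred: collect plain Python records first, then construct the DataFrame once
records = [{'key': 'foo', 'value': 1},
           {'key': 'bar', 'value': 2},
           {'key': 'baz', 'value': 3}]
df_records = md.DataFrame(records)

# discouraged: repeated concatenation copies the accumulated data on every step
df_slow = md.DataFrame(records[:1])
for rec in records[1:]:
    df_slow = md.concat([df_slow, md.DataFrame([rec])])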

Join#

SQL style merges. See the Database style joining section.

In [53]: left = md.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})

In [54]: right = md.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})

In [55]: left.execute()
Out[55]: 
   key  lval
0  foo     1
1  foo     2

In [56]: right.execute()
Out[56]: 
   key  rval
0  foo     4
1  foo     5

In [57]: md.merge(left, right, on='key').execute()
Out[57]: 
   key  lval  rval
0  foo     1     4
1  foo     1     5
2  foo     2     4
3  foo     2     5

Here is another example:

In [58]: left = md.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})

In [59]: right = md.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]})

In [60]: left.execute()
Out[60]: 
   key  lval
0  foo     1
1  bar     2

In [61]: right.execute()
Out[61]: 
   key  rval
0  foo     4
1  bar     5

In [62]: md.merge(left, right, on='key').execute()
Out[62]: 
   key  lval  rval
0  foo     1     4
1  bar     2     5

Grouping#

By “group by” we are referring to a process involving one or more of the following steps:

  • Splitting the data into groups based on some criteria

  • Applying a function to each group independently

  • Combining the results into a data structure

In [63]: df = md.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
   ....:                          'foo', 'bar', 'foo', 'foo'],
   ....:                    'B': ['one', 'one', 'two', 'three',
   ....:                          'two', 'two', 'one', 'three'],
   ....:                    'C': mt.random.randn(8),
   ....:                    'D': mt.random.randn(8)})
   ....: 

In [64]: df.execute()
Out[64]: 
     A      B         C         D
0  foo    one  0.202711 -0.448300
1  bar    one -0.985369 -1.963379
2  foo    two -0.061727  0.417080
3  bar  three -1.942882  0.664544
4  foo    two -0.138829 -1.339837
5  bar    two  0.159911 -0.667307
6  foo    one -1.225715 -1.105817
7  foo  three -1.378385  1.021276

Grouping and then applying the sum() function to the resulting groups.

In [65]: df.groupby('A').sum().execute()
Out[65]: 
            C         D
A                      
bar -2.768340 -1.966141
foo -2.601944 -1.455598

Grouping by multiple columns forms a hierarchical index, and again we can apply the sum function.

In [66]: df.groupby(['A', 'B']).sum().execute()
Out[66]: 
                  C         D
A   B                        
bar one   -0.985369 -1.963379
    three -1.942882  0.664544
    two    0.159911 -0.667307
foo one   -1.023003 -1.554117
    three -1.378385  1.021276
    two   -0.200556 -0.922757

Plotting#

We use the standard convention for referencing the matplotlib API:

In [67]: import matplotlib.pyplot as plt

In [68]: plt.close('all')

In [69]: ts = md.Series(mt.random.randn(1000),
   ....:                index=md.date_range('1/1/2000', periods=1000))
   ....: 

In [70]: ts = ts.cumsum()

In [71]: ts.plot()
Out[71]: <AxesSubplot:>
[figure: series_plot_basic.png -- line plot of the cumulative-sum Series]

On a DataFrame, the plot() method is a convenience to plot all of the columns with labels:

In [72]: df = md.DataFrame(mt.random.randn(1000, 4), index=ts.index,
   ....:                   columns=['A', 'B', 'C', 'D'])
   ....: 

In [73]: df = df.cumsum()

In [74]: plt.figure()
Out[74]: <Figure size 640x480 with 0 Axes>

In [75]: df.plot()
Out[75]: <AxesSubplot:>

In [76]: plt.legend(loc='best')
Out[76]: <matplotlib.legend.Legend at 0x7f52185464d0>
[figure: frame_plot_basic.png -- line plot of the four cumulative-sum columns with a legend]

Getting data in/out#

CSV#

Writing to a csv file.

In [77]: df.to_csv('foo.csv').execute()
Out[77]: 
Empty DataFrame
Columns: []
Index: []

Reading from a csv file.

In [78]: md.read_csv('foo.csv').execute()
Out[78]: 
     Unnamed: 0         A          B          C          D
0    2000-01-01  0.939091  -1.039370   1.186373  -0.435623
1    2000-01-02  1.015892  -1.445657   0.775589  -0.708444
2    2000-01-03  0.126036   0.038574   0.772313  -1.454835
3    2000-01-04 -1.923846  -0.701713   2.196586  -0.945821
4    2000-01-05 -0.561005   1.171094   0.708817   0.787630
..          ...       ...        ...        ...        ...
995  2002-09-22 -1.575408  15.838433 -24.230602  65.672469
996  2002-09-23 -1.636742  18.760213 -25.933767  66.910663
997  2002-09-24 -2.154960  19.675557 -25.497723  66.363891
998  2002-09-25 -4.315121  20.019056 -25.150312  66.148998
999  2002-09-26 -4.281877  20.081122 -24.974266  66.418760

[1000 rows x 5 columns]
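
The index that to_csv wrote out comes back as the Unnamed: 0 column. To restore it as a datetime index, the usual pandas-style keywords can be passed; this is a sketch that assumes md.read_csv forwards index_col and parse_dates in the same way pandas does:

# re-read the file, turning the first column back into a parsed datetime index
md.read_csv('foo.csv', index_col=0, parse_dates=True).execute()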