Averaging column values by several intervals in Python


I have a dataframe with a depth column and other value columns:

import pandas as pd

data = {'Depth': [1.0, 1.0, 1.5, 2.0, 2.5, 2.5, 3.0, 3.5, 4.0, 4.0, 5.0, 5.5, 6.0],
        'Value1': [44, 46, 221, 12, 47, 44, 67, 90, 100, 111, 112, 120, 122],
        'Value2': [55, 65, 76, 45, 55, 58, 23, 12, 32, 20, 22, 26, 36]}

df = pd.DataFrame(data)

As you can see, there are sometimes repeated values in Depth.

I'd like to be able to group by depth intervals and average over them. For example, given a list of interval widths:

intervals = [1.0, 2.0]

I want to break the data set up on those intervals and average each value column (Value1, Value2) per interval, to get something like:

    Depth  Value1  Value2  Avg1_1  Avg2_1  Avg1_2  Avg2_2
0     1.0      44      55   80.75   60.25    78.2       .
1     1.0      46      65   80.75   60.25    78.2       .
2     1.5     221      76   80.75   60.25    78.2       .
3     2.0      12      45   80.75   60.25    78.2
4     2.5      47      55   52.67       .    78.2
5     2.5      44      58   52.67       .    78.2
6     3.0      67      23   52.67       .    78.2
7     3.5      90      12  100.33            78.2
8     4.0     100      32  100.33            78.2
9     4.0     111      20  100.33            78.2
10    5.0     112      22     112       .
11    5.5     120      26     121       .
12    6.0     122      36     121       .

Here Avg1_1 is the average of Value1 over every interval of width 1.0 (1.0 - 2.0, 2.5 - 3.0, ... etc.), and likewise for the other columns.

Is there an easy way to do this using groupby in a loop?










python python-3.x pandas dataframe






asked Mar 28 at 22:10

HelloToEarth
1,253 · 1 gold badge · 5 silver badges · 19 bronze badges










  • You can do it with cut, but you need to show us the edges. For example, do 1 and 2 both fall into the first interval, so [1, 2], with the next interval becoming (2, 3]? – WeNYoBen, Mar 28 at 22:24

  • The cuts would be on intervals (1.0, 2.0), (2.0, 3.0), (3.0, 4.0), (5.0, 6.0) for the interval 1.0 calculations. – HelloToEarth, Mar 28 at 22:33

  • Nope; in your example a boundary value is contained within one interval (e.g. 1 and 2 are in the same interval), which doesn't follow a consistent rule that code can reproduce. – WeNYoBen, Mar 28 at 22:35

  • Also, if 1.0 and 2.0 belong to one interval, why are 5.0, 5.5 and 6.0 not in the same interval? – WeNYoBen, Mar 28 at 22:43

  • My mistake. It would be (1.0, 2.0], (2.5, 3.0], ... etc. – HelloToEarth, Mar 29 at 14:23
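
For reference, a minimal sketch of the cut-based approach mentioned in the comments above. The bin edges here are an assumption (built from the Depth range with include_lowest=True, so the first bin is closed on both ends), since the exact edge handling was still being clarified in the comments:

import numpy as np
import pandas as pd

data = {'Depth': [1.0, 1.0, 1.5, 2.0, 2.5, 2.5, 3.0, 3.5, 4.0, 4.0, 5.0, 5.5, 6.0],
        'Value1': [44, 46, 221, 12, 47, 44, 67, 90, 100, 111, 112, 120, 122],
        'Value2': [55, 65, 76, 45, 55, 58, 23, 12, 32, 20, 22, 26, 36]}
df = pd.DataFrame(data)

for width in (1.0, 2.0):
    # Right-closed bins such as (1.0, 2.0]; the first bin also keeps its left
    # edge because of include_lowest=True. These edges are an assumption.
    edges = np.arange(df['Depth'].min(), df['Depth'].max() + width, width)
    bins = pd.cut(df['Depth'], edges, include_lowest=True)
    suffix = str(int(width))
    # transform('mean') broadcasts each bin's mean back onto the rows of that bin.
    df['Avg1_' + suffix] = df.groupby(bins)['Value1'].transform('mean')
    df['Avg2_' + suffix] = df.groupby(bins)['Value2'].transform('mean')

With width 1.0 this reproduces the Avg1_1 / Avg2_1 columns from the desired output above; for width 2.0 the result depends on exactly where the edges are placed, which is what the comments were asking about.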












1 Answer
You can accomplish this with the dataframe's apply method, using a boolean mask to select the rows whose Depth meets a condition such as Depth <= depth + 1.0 or Depth <= depth + 2.0 and averaging their values.

df['avg1_1'] = df.apply(lambda x: (df[df['Depth'] <= x['Depth'] + 1.0]['Value1'].values.sum() /
                                   len(df[df['Depth'] <= x['Depth'] + 1.0]['Value1'].values)),
                        axis=1)

df['avg2_1'] = df.apply(lambda x: (df[df['Depth'] <= x['Depth'] + 1.0]['Value2'].values.sum() /
                                   len(df[df['Depth'] <= x['Depth'] + 1.0]['Value2'].values)),
                        axis=1)

df['avg1_2'] = df.apply(lambda x: (df[df['Depth'] <= x['Depth'] + 2.0]['Value1'].values.sum() /
                                   len(df[df['Depth'] <= x['Depth'] + 2.0]['Value1'].values)),
                        axis=1)

df['avg2_2'] = df.apply(lambda x: (df[df['Depth'] <= x['Depth'] + 2.0]['Value2'].values.sum() /
                                   len(df[df['Depth'] <= x['Depth'] + 2.0]['Value2'].values)),
                        axis=1)

This would return:

    Depth  Value1  Value2  newval     avg1_1     avg2_1     avg1_2     avg2_2
0     1.0      44      55    66.0  80.750000  60.250000  68.714286  53.857143
1     1.0      46      65   241.0  80.750000  60.250000  68.714286  53.857143
2     1.5     221      76    32.0  69.000000  59.000000  71.375000  48.625000
3     2.0      12      45    67.0  68.714286  53.857143  78.200000  44.100000
4     2.5      47      55    64.0  71.375000  48.625000  78.200000  44.100000
5     2.5      44      58    87.0  71.375000  48.625000  78.200000  44.100000
6     3.0      67      23   110.0  78.200000  44.100000  81.272727  42.090909
7     3.5      90      12   120.0  78.200000  44.100000  84.500000  40.750000
8     4.0     100      32   131.0  81.272727  42.090909  87.384615  40.384615
9     4.0     111      20   132.0  81.272727  42.090909  87.384615  40.384615
10    5.0     112      22   140.0  87.384615  40.384615  87.384615  40.384615
11    5.5     120      26   142.0  87.384615  40.384615  87.384615  40.384615
12    6.0     122      36     NaN  87.384615  40.384615  87.384615  40.384615





answered Mar 28 at 22:39

AlecZ
205 · 3 silver badges · 6 bronze badges
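
As a side note, the same per-row calculation in the answer above can be written a little more compactly with .mean(); this is only a sketch of an equivalent form of that snippet, assuming the same cumulative condition (Depth <= this row's Depth + window width), not a different method:

for width in (1.0, 2.0):
    suffix = str(int(width))
    # Average over all rows whose Depth is at most this row's Depth plus the
    # window width (a cumulative window, exactly as in the answer above).
    df['avg1_' + suffix] = df['Depth'].apply(
        lambda d: df.loc[df['Depth'] <= d + width, 'Value1'].mean())
    df['avg2_' + suffix] = df['Depth'].apply(
        lambda d: df.loc[df['Depth'] <= d + width, 'Value2'].mean())

The avg columns should match the output shown above, since sum()/len() over the masked values is just their mean.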
































