I want to understand why lapply exhausts memory but a for loop doesn't
I am working in R and trying to understand the best way to join data frames when one of them is very large.
I have a data frame which is not excruciatingly large but also not small (~80K observations of 8 variables, 144 MB). I need to match observations from this data frame to observations from another smaller data frame on the basis of a date range. Specifically, I have:
events.df <- data.frame(individual = c('A','B','C','A','B','C'),
                        event = c(1,1,1,2,2,2),
                        time = as.POSIXct(c('2014-01-01 08:00:00','2014-01-05 13:00:00','2014-01-10 07:00:00','2014-05-01 01:00:00','2014-06-01 12:00:00','2014-08-01 10:00:00'), format="%Y-%m-%d %H:%M:%S"))

trips.df <- data.frame(individual = c('A','B','C'), trip = c('x1A','CA1B','XX78'),
                       trip_start = as.POSIXct(c('2014-01-01 06:00:00','2014-01-04 03:00:00','2014-01-08 12:00:00'), format="%Y-%m-%d %H:%M:%S"),
                       trip_end = as.POSIXct(c('2014-01-03 06:00:00','2014-01-06 03:00:00','2014-01-11 12:00:00'), format="%Y-%m-%d %H:%M:%S"))
In my case events.df contains around 80,000 unique events and I am looking to match them to trips from the trips.df data frame, which has around 200 unique trips. Each trip has a unique trip identifier ('trip'). I would like to match based on whether the event took place during the date range defining a trip.
First, I have tried fuzzy_inner_join from the fuzzyjoin library. It works great in principle:
fuzzy_inner_join(events.df,trips.df,by=c('individual'='individual','time'='trip_start','time'='trip_end'),match_fun=list(`==`,`>=`,`<=`))
individual.x event time individual.y trip trip_start trip_end
1 A 1 2014-01-01 08:00:00 A x1A 2014-01-01 06:00:00 2014-01-03 06:00:00
2 B 1 2014-01-05 13:00:00 B CA1B 2014-01-04 03:00:00 2014-01-06 03:00:00
3 C 1 2014-01-10 07:00:00 C XX78 2014-01-08 12:00:00 2014-01-11 12:00:00
but runs out of memory when I try to apply it to the larger data frames.
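(Editorial aside, hedged: a plausible reason for the blow-up is that a fuzzy join generally has to evaluate its match functions over candidate row pairs, which approaches the full cross product of the two tables before any filtering happens. A back-of-envelope check in R, using the row counts stated above:)
# Rough upper bound on the candidate pairs a fuzzy join may materialize:
nrow_events <- 80000    # unique events in events.df
nrow_trips  <- 200      # unique trips in trips.df
nrow_events * nrow_trips   # 16,000,000 candidate pairs, before any filtering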
Here is a second solution I cobbled together:
trip.match <- function(tripid){
  individual <- trips.df$individual[trips.df$trip==tripid]
  start <- trips.df$trip_start[trips.df$trip==tripid]
  end <- trips.df$trip_end[trips.df$trip==tripid]
  tmp <- events.df[events.df$individual==individual &
                     events.df$time >= start &
                     events.df$time <= end,]
  tmp$trip <- tripid
  return(tmp)
}

library(data.table)   # for rbindlist()
result <- data.frame(rbindlist(lapply(unique(trips.df$trip), trip.match)))
This solution also breaks down because the list object returned by lapply is 25GB and the attempt to cast this list to a data frame also exhausts the available memory.
I have been able to do what I need to do using a for loop. Basically, I append a column onto events.df and loop through the unique trip identifiers and populate the new column in events.df accordingly:
events.df$trip <- NA
for(i in unique(trips.df$trip)){
  individual <- trips.df$individual[trips.df$trip==i]
  start <- min(trips.df$trip_start[trips.df$trip==i])
  end <- max(trips.df$trip_end[trips.df$trip==i])
  events.df$trip[events.df$individual==individual & events.df$time >= start & events.df$time <= end] <- i
}
> events.df
individual event time trip
1 A 1 2014-01-01 08:00:00 x1A
2 B 1 2014-01-05 13:00:00 CA1B
3 C 1 2014-01-10 07:00:00 XX78
4 A 2 2014-05-01 01:00:00 <NA>
5 B 2 2014-06-01 12:00:00 <NA>
6 C 2 2014-08-01 10:00:00 <NA>
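(Editorial aside: a minimal way to quantify the difference between the two approaches, using only the objects defined above and base R's object.size — run it on the toy data, since on the real data the lapply call itself is what blows up. The for loop fills one column of one table in place, while lapply accumulates a full copy of the matched rows, POSIXct columns and all, for every trip:)
# Size of the single table the for loop mutates in place:
print(object.size(events.df))
# Combined size of all the per-trip copies lapply accumulates before
# rbindlist() even runs:
per.trip <- lapply(unique(trips.df$trip), trip.match)
print(sum(sapply(per.trip, object.size)))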
My question is this: I'm not a very advanced R programmer, so I expect there is a more memory-efficient way to accomplish what I'm trying to do. Is there?
r for-loop lapply
@Parfait, in the 3rd code chunk above you'll see result <- data.frame(rbindlist(lapply(unique(trips.df$trip), trip.match))) ...the lapply() is wrapped inside some code to cast the resulting list to a data frame.
– aaronmams
Mar 29 at 15:52
Your fuzzy join does not use tripid.
– Parfait
Mar 29 at 17:02
@Parfait, yes, the fuzzy join does not join on the tripid. The idea is to attach the tripid to each event. The fuzzy join works on the individual and the time range to attach the tripid to the row for any corresponding event.
– aaronmams
Mar 29 at 17:38
2 Answers
Try creating a table that expands the trip ranges by hour and then merge it with the events. Here is an example (using data.table, because data.table outperforms data.frame for larger datasets):
library('data.table')

tripsV <- unique(trips.df$trip)

tripExpand <- function(t){
  dateV <- seq(trips.df$trip_start[trips.df$trip == t],
               trips.df$trip_end[trips.df$trip == t],
               by = 'hour')
  data.table(trip = t, time = dateV)
}

trips.dt <- rbindlist(
  lapply(tripsV, function(t) tripExpand(t))
)

merge(events.df,
      trips.dt,
      by = 'time')
Output:
time individual event trip
1 2014-01-01 08:00:00 A 1 x1A
2 2014-01-05 13:00:00 B 1 CA1B
3 2014-01-10 07:00:00 C 1 XX78
So you are basically translating the trip table into a trip-hour long-form panel dataset. That makes for easy merging with the event dataset. I haven't benchmarked it against your current method but my hunch is that it will be more memory- and CPU-efficient.
– Andrew Royal
answered Mar 29 at 1:55
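(Editorial aside, not part of this answer: data.table can also express the range match directly as a non-equi join, which skips building the expanded trip-hour table entirely and copes with event times at any granularity. A minimal sketch, assuming data.table >= 1.9.8:)
library(data.table)
ev <- as.data.table(events.df)
tr <- as.data.table(trips.df)
# Update join: for each trip in tr, find the ev rows with the same individual
# whose time falls inside [trip_start, trip_end] and write that trip's id
# (i.trip) into a new 'trip' column; unmatched events stay NA.
ev[tr, trip := i.trip,
   on = .(individual, time >= trip_start, time <= trip_end)]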
thanks, your solution works well for the case that I posted. I didn't do a great job of expressing the general nature of my problem. Your solution does break down if I have an event occurring at, say, 2014-01-01 08:31:00. In that case, I can use your framework and just expand the trips data set by minute. In practice this is exactly what I have done and I can confirm that expanding the trips data set and merging with data.table() is substantially faster than using fuzzy_inner_join().
– aaronmams
Mar 29 at 20:40
Great-- I'm happy it worked! As far as the minute observations go, glad that you figured out a work-around; another solution would be to round the event times to the hour prior to the merge with trunc.POSIXt(time, 'hours').
– Andrew Royal
Mar 29 at 21:57
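(For concreteness, a minimal sketch of the rounding idea from the comment above. It assumes, as in the sample data, that trip_start values fall on the hour, so the keys of the expanded trips.dt are on-the-hour timestamps; time_hour is a hypothetical helper column:)
# trunc.POSIXt returns POSIXlt, so convert back to POSIXct before merging.
events.df$time_hour <- as.POSIXct(trunc.POSIXt(events.df$time, units = 'hours'))
# Merge the truncated event times against the hourly trip table from the answer.
merge(events.df, trips.dt, by.x = 'time_hour', by.y = 'time')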
Consider splitting your data with data.table's split and running each subset through fuzzy_inner_join, then call rbindlist to bind all the data frame elements together into a single output.
library(data.table)
library(fuzzyjoin)

# split.data.table takes a 'by' argument, so convert events.df first
df_list <- split(as.data.table(events.df), by = "individual")

fuzzy_list <- lapply(df_list, function(sub.df)
  fuzzy_inner_join(sub.df, trips.df,
                   by = c('individual'='individual', 'time'='trip_start', 'time'='trip_end'),
                   match_fun = list(`==`, `>=`, `<=`))
)

# REMOVE TEMP OBJECT AND CALL GARBAGE COLLECTOR
rm(df_list); gc()

final_df <- rbindlist(fuzzy_list)

# REMOVE TEMP OBJECT AND CALL GARBAGE COLLECTOR
rm(fuzzy_list); gc()

– Parfait
answered Mar 29 at 17:04, edited Mar 29 at 18:08
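(Another hedged variation on the same split idea, not from the original answer: if even accumulating one join result per individual is too much, each chunk can be streamed to disk with data.table::fwrite(append = TRUE), so only a single chunk's join result is ever held in memory. matched_events.csv is a hypothetical output path:)
library(data.table)
library(fuzzyjoin)

out_path <- "matched_events.csv"   # hypothetical output file
if (file.exists(out_path)) file.remove(out_path)

for (sub.df in split(as.data.table(events.df), by = "individual")) {
  chunk <- fuzzy_inner_join(sub.df, trips.df,
                            by = c('individual'='individual',
                                   'time'='trip_start', 'time'='trip_end'),
                            match_fun = list(`==`, `>=`, `<=`))
  # First write creates the file with a header; later writes append rows only.
  fwrite(chunk, out_path, append = file.exists(out_path))
  rm(chunk); gc()
}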
thanks. I have tried implementing a segmented approach as you suggested. Informally, I've found that the fuzzy join works very slowly even on data frames including ~10K events. Splitting the data will eventually get me what I need...but I guess I was looking for something faster. At any rate, your suggestion is valuable so I upvoted your answer. The answer provided by Andrew Royal is a bit more direct and faster so I'm accepting that one.
– aaronmams
Mar 29 at 17:42