shorten awk command pipes
I'm fairly new to using Linux on the shell.
I want to reduce the number of pipes I use to extract data from lines like the following:
V 190917135635Z 1005 unknown /C=DE/ST=City/L=City/O=something/OU=Somewhat/CN=someserver.com/emailAddress=test@toast.com
My goal is to put the following values into a separate file:
190917135635 someserver.com
The command I use right now is fairly long and heavily piped; it looks like this:
grep -v '^R' $file | awk '{print $2, $6}' | awk -F'[=|/]' '{print $1, $3}' | awk '{print $1, $3}' | awk -F 'Z ' '{print $1, $2}' > sdata.txt
(The file contains other lines starting with 'R', so I exclude those with the grep.)
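For reference, here is the same pipeline spread over several lines, with a comment on what each stage is meant to do (the stage descriptions are inferred from the sample line and the desired output above, not taken from the original post):

grep -v '^R' "$file" |                     # drop the lines that start with 'R'
  awk '{print $2, $6}' |                   # keep the timestamp column and the certificate subject column
  awk -F'[=|/]' '{print $1, $3}' |         # re-split on '=' and '/' to start peeling the CN value out of the subject
  awk '{print $1, $3}' |                   # keep only the timestamp and the CN value
  awk -F 'Z ' '{print $1, $2}' > sdata.txt # split on 'Z ' to drop the trailing Z from the timestamp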
Is this a legit way of doing it?
Is there a way to get this in a shorter command?
Thanks a lot!
Tags: linux, awk
What is the delimiter?
– psychoCoder, Mar 22 at 4:55
Welcome to SO. Please post samples of the input and the expected output in your post and let us know.
– RavinderSingh13, Mar 22 at 4:58
You can get rid of the grep entirely: awk '/^[^R]/ { ... }' $file
– Shawn, Mar 22 at 5:11
awk -F ' +|Z *|=|/' '{print $2,$16}' file ?
– Cyrus, Mar 22 at 5:17
awk -F'[ \tZ=/]+' '!/^R/{print $2,$16}'
– jhnc, Mar 22 at 6:07
6 Answers
Another awk, using match() to find the CN entry and substr() to extract it for printing, if it exists.

$ awk '!/^R/{
    print $2,
    (match($0,/CN=[^\/]+/) ? substr($0,RSTART+3,RLENGTH-3) : "")   # 3 == length("CN=")
}' file

Output:
190917135635Z someserver.com
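A quick way to try this without creating a file, piping the question's sample line straight in (the data line is copied verbatim from the question):

printf '%s\n' 'V 190917135635Z 1005 unknown /C=DE/ST=City/L=City/O=something/OU=Somewhat/CN=someserver.com/emailAddress=test@toast.com' |
  awk '!/^R/{ print $2, (match($0,/CN=[^\/]+/) ? substr($0,RSTART+3,RLENGTH-3) : "") }'
# prints: 190917135635Z someserver.com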
You forgot the exclusion of lines starting with R.
– kvantour, Mar 22 at 7:34
This is by far the safest option. Since we do not know what the final string really looks like (the sample data can differ from line to line), searching for CN is much better than hard-coding field numbers with an ad-hoc field separator.
– kvantour, Mar 22 at 7:36
It looks like your data fields come from SSL certificates, so many of them (City, Organization Name, etc.) might contain spaces; that is presumably why you need so many awk stages. Here is one way to get around those issues: instead of transforming your existing logic, the idea is to find the domain name by searching for the substring CN= and fetching its corresponding value.

awk '
!/^R/ {
    start  = index($0, "CN=") + 3
    end    = index(substr($0, start), "/")
    domain = end ? substr($0, start, end - 1) : substr($0, start)
    print $2, domain
}' file.txt

Where:
- we use index() to find the start position of the substring CN=; +3 is then the starting point of the domain name
- then we search for the next / to get the end position of the domain; if the domain sits at the end of the line there is no / and end will be 0
- then we take the domain name between CN= and the next / with substr($0, start, end-1), or up to the end of the line with substr($0, start) (a small trace of these values is sketched below).
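To see those intermediate values on the question's sample line, a throwaway trace along these lines can help (the variable names mirror the snippet above; the echoed line is the sample data):

echo 'V 190917135635Z 1005 unknown /C=DE/ST=City/L=City/O=something/OU=Somewhat/CN=someserver.com/emailAddress=test@toast.com' |
  awk '{
      start = index($0, "CN=") + 3               # offset of the first character after "CN="
      end   = index(substr($0, start), "/")      # offset of the next "/" within the remainder
      print "start=" start, "end=" end, "domain=" (end ? substr($0, start, end - 1) : substr($0, start))
  }'
# should print: start=78 end=15 domain=someserver.com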
A short version:

awk '!/^R/{s=index($0, "CN=")+3; e=index(substr($0, s), "/"); print $2, substr($0, s, e ? e-1 : 253)}' file.txt

where 253 is the longest possible domain name, which should be enough for your needs.

Update: actually, it's even easier to just use match(), but the point is the same:

awk '!/^R/{if (match($0, "/CN=([^/]*)")) print $2, substr($0, RSTART+4, RLENGTH-4)}' file.txt
If this:

$ awk -F'[[:space:]/=]+' '!/^R/{print $2+0, $16}' file
190917135635 someserver.com

isn't all you need, then update your question to clarify your requirements and provide more truly representative sample input/output.
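In case the +0 looks odd: adding zero makes awk convert the field to a number, and that conversion stops at the first non-numeric character, which is what strips the trailing Z from the timestamp here. A minimal illustration:

echo '190917135635Z' | awk '{ print $1 + 0 }'   # prints 190917135635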
Using GNU sed:

sed -E -n '/^R/d; s/^[A-Za-z]\s+([0-9]+)\s+[0-9]+\s+.*\/CN=(.*)\/.*/\1 \2/p' input_file > new_file
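One thing to watch: the timestamp field in the sample ends in Z (190917135635Z), which [0-9]+ does not consume, so the substitution may not fire as written. A variant of the same idea that consumes the Z explicitly (my sketch, still relying on GNU sed's \s, shown here against the sample line via a pipe):

printf '%s\n' 'V 190917135635Z 1005 unknown /C=DE/ST=City/L=City/O=something/OU=Somewhat/CN=someserver.com/emailAddress=test@toast.com' |
  sed -E -n '/^R/d; s/^[A-Za-z]\s+([0-9]+)Z\s+[0-9]+\s+.*\/CN=(.*)\/.*/\1 \2/p'
# 190917135635 someserver.com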
EDIT: Strictly assuming that the OP's Input_file looks exactly like the shown sample, one could try the following.

awk -F"[ =/Z]" '!/^R/{print $8,$37}' Input_file

For fun :) in case one wants to stay closer to the OP's own approach, we could try the following.

awk '
!/^R/ {
    val = $2 OFS $5
    split(val, array, "[ /Z]")
    val1 = array[1] OFS array[9] OFS array[10]
    split(val1, array1, "[ =]")
    print array1[1], array1[3]
}' Input_file
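Because this hard-codes field numbers against a single-character separator class, the exact indices ($8 and $37 here, $38 on the OP's real data, as noted in the comments) shift with every extra space or delimiter. A quick, throwaway way to list the indices for your own file (my sketch, reusing the answer's separator):

awk -F"[ =/Z]" '!/^R/{ for (i = 1; i <= NF; i++) printf "%d=[%s] ", i, $i; print "" }' Input_file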
Thank you! This shortens it quite a bit. Like one comment mentioned, I had a little twist by removing a space to mask the data. I just put in your code with $38 and it spits out just what I wanted :)
– uptoya, Mar 22 at 11:16
@uptoya, could you please mention the reason for unmarking this answer as the correct one?
– RavinderSingh13, Apr 3 at 14:37
You are using $6 in the second awk command, which means your 5th column potentially contains spaces, unlike the sample data you showed; it is also the CN= part that you are extracting (the CNAME?).

So here is a more portable and more exact sed way that does not require GNU sed:

sed -n -e '/^R/!p;'

If you just want digits in the second column and it begins with a digit, then you can change it to this:

sed -n -e '/^R/!p;'
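To illustrate the point about whitespace-separated field numbers shifting, compare a line whose fourth column is one word with one where it is two words (both lines below are simplified, made-up variants of the sample, purely for demonstration):

printf '%s\n' 'V 190917135635Z 1005 unknown /C=DE/CN=someserver.com' \
              'V 190917135635Z 1005 not known /C=DE/CN=someserver.com' |
  awk '{ print NF, "->", $5 }'
# 5 -> /C=DE/CN=someserver.com
# 6 -> known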
Yes, the real data had another space in between; thank you for pointing it out :)
– uptoya, Mar 22 at 11:17