This picks a random line from a stream of unknown length in a single pass, while storing only one line at a time. With a few more options it will handle a fortune file (or any stream of things you can step through record by record).
> Filtering ... perl one-liners can be used for filtering lines matched by a regexp, similar to grep, sed and awk.
In this section, the examples do things that are easy to do with grep, sed, or awk, but since perl is more powerful, wouldn't it be better to use examples that aren't as easy to do with those other tools?
For example, I recently found I could use short perl one-liners to filter files in a pipe like using `[[ -f $file ]]` per line. I used to use stest from dmenu like this:
pacman -Qlq $pkg | stest -f
but now I see I could instead do the same with a short perl one-liner, anywhere, without installing a special package.
If you remove `cat`, I get nearly identical run times on my machine. I used `shuf -r -n10000000 -i 1-3 > t1` to generate the input. Also, the output won't be the same, as the perl solution retains input order. `sort -nu` is faster than the `sort+uniq` combo.
perl -ne 'print if !$h{$_}++'
runs faster (but this might depend on input, not sure if it is always faster than uniq) than the `sort -u` or `sort+uniq` combo
what's surprising to me is that this was faster than
I think it's very widely used for its original purpose as a shell replacement. When I need a short script, particularly for text processing, I'll usually reach for Perl now that the Perl 5/Perl 6 thing has been resolved. Just look at the activity on CPAN:
https://metacpan.org/recent
Yes, absolutely, perl is part of my standard toolbox for some specific tasks, and I'd say I use it at least once a week.
For example, for short scripts to process/massage data (either one-shot or in a production pipeline), it's very expressive and to the point, especially if you're going to use regexps.
In particular, perl is way less verbose than e.g. python when it comes to regexes, because regexes are built into the basic syntax of the language.
OTOH, there are many, many things that are broken in perl, but if I had to cite two things that made me kind of give up on it for new large and mid-level complexity projects these would be:
- near impossible to predict perl's handling of "numerical" values
- having to type dollar signs to dereference variables
There are Perl interface libraries for everything, so it is often a really great choice, even if it is no longer sexy now that the peak CGI days are gone.
One thing that I like about Go is that it promotes testing as something that should be required, but in my experience the vast majority of Perl modules also come with very good test-case coverage, it being the source of the TAP format after all!
I've written a few new sites and APIs using the CGI::Application framework over the past few years, and I'll often knock up simple transformation scripts in Perl to glue APIs together and perform ad-hoc transformations of data between various tools.
Although I've mostly started writing new services in golang, I don't feel the need to go back and rewrite the stable systems that are in perl.
Many industries use Perl 5 to keep things running. The semiconductor industry, in particular, uses a lot of existing Perl code, and develops a lot of new Perl code, to do things commercial software cannot.
There are "real" companies using Perl presently: Booking and DuckDuckGo and others. Looking at RedMonk, TIOBE, and other surveys, Perl isn't at the top of the heap by any measure, but it also stubbornly refuses to die. I write Perl every day and frankly don't care if people think it is "dead", and even I am surprised at how prominently it still figures in language surveys.
Most of the key libraries I use are updated regularly. Perl itself is under active development with regular releases.
You probably jest, but perl introduced simple arrays and hashes long before introducing references, and this created a lot of confusion later. Very simple json-like structures are pretty confusing in perl now. In most other dynamic languages the distinction between an array and a reference to an array is nonexistent, thankfully.
There are some genuinely exotic data structures, like pseudo-hashes. And perl can do this!
$x = \$x
But having switched to ruby 15 years ago I never missed this particular quirk.
asicsp | 5 years ago
I had started tutorials on command line text processing (https://github.com/learnbyexample/Command-line-text-processi...) more than three years ago. I learnt a lot writing them and continue to learn more with experience.
I finished first version of cookbook on Perl one-liners recently. With that, five of the major chapters from that repo are now accessible as better formatted ebooks, updated for newer software versions, exercises, solutions, etc.
You can download pdf/epub versions of the ebooks using the links below (free until this Sunday)
* Perl one-liners cookbook: https://gumroad.com/l/perl-oneliners or https://leanpub.com/perl-oneliners
* Five book bundle of grep/sed/awk/perl/ruby one-liners: https://gumroad.com/l/oneliners or https://leanpub.com/b/oneliners
---
I'd highly appreciate your feedback and hope that you find these resources useful. Happy learning and stay safe :)
kqr | 5 years ago
The reason I ask is that I've never figured out a good way to make use of this type of material. Reading idly means I retain almost nothing. I try to avoid spaced repetition on things I don't actually use (that has turned out to be a good filter to avoid over-adding flashcards).
Whenever I have a problem that can be solved by a Perl one-liner, there are two obstacles to using something like your book:
1. Figuring out that my problem is solvable with a Perl one-liner in the first place, and
2. Finding the right patterns in the book to piece together to achieve my goal.
Have you considered any sort of tiered index to allow one to classify a problem at hand and look up the relevant structures in the book?
rStar | 5 years ago
zengargoyle | 5 years ago
Donckele | 5 years ago
jolmg | 5 years ago
asicsp | 5 years ago
hmm, food for thought...I wasn't thinking along those lines after I had shown/linked powerful one-liners in the very first section: https://learnbyexample.github.io/learn_perl_oneliners/one-li...
I'll see if I can add such examples in the actual content too. I was concentrating more on explaining the syntax, showing features, etc.
hnarn | 5 years ago
This one is interesting, because I've run into this issue many times: "uniq" will not work the way you might expect (if, like me, you don't read "man" first), and will emit a value multiple times when its duplicates aren't adjacent in the file. Normally the workaround I've found is to sort the output before passing it to uniq, so I was curious whether the perl way was faster or not.
I wrote 10M random numbers between 1-3 to a text file and tried this benchmark:
> time cat hn.txt | sort -n | uniq
real 0m3,929s
> time cat hn.txt | perl -MList::Util=uniq -e 'print uniq <>'
real 0m2,205s
Almost twice as fast. Lesson learned I guess, don't knock Perl for CLI wrangling :)
I can share one of my frequently used ones as well: if you have a log file whose lines start with a UNIX epoch timestamp, you can pipe the text to this perl command to convert it to localtime:
> cat logfile | perl -pe 's/^(\d+)/localtime($1)/e'
(I'm sure you could do this with something other than Perl as well, but it does the job well)
asicsp | 5 years ago
tyingq | 5 years ago
randmeerkat | 5 years ago
bachmeier | 5 years ago
There are two heavily developed/used web frameworks for Perl: https://mojolicious.org/ and http://perldancer.org/
Users of Mojolicious: https://github.com/mojolicious/mojo/wiki/Projects-and-Compan...
ur-whale | 5 years ago
stevekemp | 5 years ago
hpcjoe | 5 years ago
I use perl daily in my work on supercomputers and HPC systems. My code helps configure and control them, and helps simplify users' work.
No. Perl is not dead by any measure.
mtnGoat | 5 years ago
reddit_clone | 5 years ago
moltar | 5 years ago
1996 | 5 years ago
forinti | 5 years ago
I also have a few Telegram bots written in Perl.
sigzero | 5 years ago
jdoege | 5 years ago
asicsp | 5 years ago
jgoodknight | 5 years ago
AQXt | 5 years ago
It is both "legacy" and "active development".
s/legacy/valuable/g;
embit | 5 years ago
claydavisss | 5 years ago
redis_mlc | 5 years ago
[deleted]
bryanrasmussen | 5 years ago
known | 5 years ago
codesnik | 5 years ago
ExcavateGrandMa | 5 years ago
[deleted]