foxfirefey: A guy looking ridiculous by doing a fashionable posing with a mouse, slinging the cord over his shoulders. (geek)
[personal profile] foxfirefey
I come bearing two useful utilities that each help you do things with the web from the command line:

* jq is a wonderful utility for working with JSON data, similar to how you can use awk for line-by-line text
* pup is a similar utility for parsing HTML, inspired by jq
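A minimal taste of jq (a sketch assuming jq is installed; the JSON here is invented):

```shell
# Pull a single field, then an array element, out of some made-up JSON.
# -r prints raw strings instead of quoted JSON values.
echo '{"name": "pup", "tags": ["html", "cli"]}' | jq -r '.name'
# -> pup
echo '{"name": "pup", "tags": ["html", "cli"]}' | jq -r '.tags[0]'
# -> html
```

pup works the same way on HTML, with CSS-style selectors in place of jq's filters.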
[personal profile] foxfirefey
WP-CLI is a set of command-line tools for managing WordPress installations. You can update plugins, set up multisite installs and much more, without using a web browser.


They've even made it easy to extend!
jadelennox: Demonic Tutor, Jadelennox: my Magic card (demonic tutor)
[personal profile] jadelennox
Little trick I do all the time when I'm looking for a string in a source tree:

find . -type f -exec egrep -H "string I'm seeking" {} \;

The things that vary:

find 
     base command, doesn't change
. 
     the top directory you're seeking in.  Could put a relative or absolute path here.
-type f 
     Narrowing down the kind of files you're seeking, in this case to regular files. 
     You could narrow it down in other ways, e.g. -name "*.c"
-exec egrep -H "string I'm seeking" {} \;
     the grep command in question. 
     The only thing I vary here is the egrep switch (and the string).
     -H lists filename and the match
     -l just lists the filename; less informative but cleaner.
     {} \; is necessary syntax for the exec argument to find.
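Putting it together on a throwaway tree (the filenames here are invented for illustration):

```shell
# Build a tiny tree to search:
tmp=$(mktemp -d)
mkdir "$tmp/src"
echo 'int main(void) { return 0; }' > "$tmp/src/main.c"
echo 'just some notes' > "$tmp/README"

# Filename plus matching line, from every regular file:
find "$tmp" -type f -exec grep -H "main" {} \;

# The same search narrowed to C sources, filenames only:
find "$tmp" -type f -name "*.c" -exec grep -l "main" {} \;

rm -r "$tmp"
```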
[personal profile] jadelennox
So a fun shell trick (works in bash, tcsh, zsh) involves curly brace expansion. Different shells have different complexities, but the simplest case that works in all shells is

mv movies-19{92,93}.txt

which expands to

mv movies-1992.txt movies-1993.txt

I use this daily. Usually to say mv foo.sh{,.bak} which translates to mv foo.sh foo.sh.bak
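A safe way to preview an expansion before running it is to put echo in front, since brace expansion happens before the command ever runs (bash syntax):

```shell
# echo shows exactly the arguments mv would receive:
echo mv movies-19{92,93}.txt
# -> mv movies-1992.txt movies-1993.txt

echo mv foo.sh{,.bak}
# -> mv foo.sh foo.sh.bak
```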
shadowspar: Side-on picture of a Commodore 64c computer (commodore 64c)
[personal profile] shadowspar
For some reason, I'd gotten the idea into my head that sed(1) is really arcane and difficult to use -- so much so that it wasn't really worth spending the time to learn it. But as I've started finding tiny uses for it, I'm starting to get the impression that it's one of those tools where knowing 10% of its syntax gets you 90% of its benefit.

I often have to run rm -r on a huge tree of thousands of small files. It'd be nice to know how far along things are, but rm -rv shows every file deletion, which is a ton of output -- enough to overwhelm a slow terminal link. Wouldn't it be great if you could print, say, every hundredth line or so?

Enter sed:

rm -rv snapshot/foo | sed -n '0~100p'

sed usually prints every line as it processes it, but -n suspends that -- lines are only printed if you explicitly ask for them with p. And then 0~100 means "operate only on every 100th line". (1~100 would be "line 1, then lines 101, 201, ...".)
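A self-contained way to see the addressing in action (first~step is a GNU sed extension, so this needs GNU sed):

```shell
# Print every 100th of 300 lines:
seq 1 300 | sed -n '0~100p'
# -> 100
#    200
#    300
```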

Another easy win with sed is being able to feed text through a regex substitution. F'rinstance, lynx -dump URL dumps a web page out as text, with a list of links at the bottom, like this:

References

   1. http://example.org/lolcats-page-2/
   2. http://example.org/lolcats/lolcat-101.gif
   3. http://example.org/lolcats/lolcat-102.gif


If you wanted to download all the animated cat gifs, you'd want to feed that list of links to something like wget, without the leading numbers in the way. But it just so happens that the leading numbers and spaces always add up to exactly six characters...so an (admittedly convoluted) way to do that is...

lynx -dump http://example.org/lolcats/ | grep http | grep gif | sed 's/^......//' | wget -nc -i -
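If the reference numbers ever grow past two digits, counting characters breaks; a slightly more flexible sketch strips any run of leading spaces and digits followed by a dot instead:

```shell
# Strip a "   1. "-style prefix of any width; with -n, the p flag
# prints only the lines where the substitution actually matched:
printf '   1. http://example.org/a.gif\n  42. http://example.org/b.gif\n' |
  sed -n 's/^ *[0-9]*\. //p'
# -> http://example.org/a.gif
#    http://example.org/b.gif
```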

Anyway...in combination with all the other commands one can string together in a pipeline, I'm sure sed can be turned to a lot of other creative uses!
foxfirefey: A picture of GIR. (gir)
[personal profile] foxfirefey
As relayed by [personal profile] themalkolm:
The best ping story I've ever heard was told to me at a USENIX conference, where a network administrator with an intermittent Ethernet had linked the ping program to his vocoder program, in essence writing:

ping goodhost | sed -e 's/.*/ping/' | vocoder

He wired the vocoder's output into his office stereo and turned up the volume as loud as he could stand. The computer sat there shouting "Ping, ping, ping..." once a second, and he wandered through the building wiggling Ethernet connectors until the sound stopped. And that's how he found the intermittent failure.
[personal profile] foxfirefey
This is a really cute bash prompt creator (ironically a GUI):

http://xta.github.com/HalloweenBash/
tonybaldwin: tony baldwin (Default)
[personal profile] tonybaldwin
This script allows you to post to friendica, with xposting options, using bash, curl and vim.

#!/bin/bash

# update friendica with bash, vim and curl
# I put this in my path as "friendi.sh"
# by tony baldwin, http://tonybaldwin.me
# on friendica at http://free-haven.org/profile/tony
# released under the GNU General Public License, v.3

# first, create a post/update

filedate=$(date +%m%d%y%H%M%S)

# if you did not enter text for update, the script asks for it

if [[ -n "$*" ]]; then
ud="$*"
else
vim "$filedate.fpost"
ud=$(cat "$filedate.fpost")
fi

# now to see if you want to crosspost elsewhere
echo "For the following question regarding crossposting, please enter the number 1 for yes, and 0 for no."
echo "If your friendica has the plugins, and you've configured them, you can crosspost to other blogs and sites."
echo "friendica will even automatically change the bbcode to proper html for you."
echo "would you like to crosspost to "
read -p "statusnet? " snet
read -p "twitter? " twit
read -p "facebook? " fb
read -p "dreamwidth? " dw
read -p "livejournal? " lj
read -p "tumblr? " tum
read -p "posterous? " pos
read -p "wordpress? " wp

# now to authenticate
read -p "Please enter your username: " uname
read -sp "Please enter your password: " pwrd; echo
read -p "Enter the domain of your Friendica site (i.e. http://friendica.somesite.net, without trailing /): " url

# and this is the curl command that sends the update to the server

if curl -u "$uname:$pwrd" -d "status=$ud&ljpost_enable=$lj&posterous_enable=$pos&dwpost_enable=$dw&wppost_enable=$wp&tumblr_enable=$tum&facebook_enable=$fb&twitter_enable=$twit&statusnet_enable=$snet&source=friendi.sh" "$url/api/statuses/update.xml" | grep -q error; then

# what does the server say?

echo "Error"
else
echo "Success!"
echo "$ud"
fi


find me on friendica at tony@free-haven.org.
see more of my scripts, in bash, tcl, python, and more, at http://tonybaldwin.me/hax
sophie: A cartoon-like representation of a girl standing on a hill, with brown hair, blue eyes, a flowery top, and blue skirt. ☀ (Default)
[personal profile] sophie
Sometimes you want to paste the output of a grep command into IRC or IM, and don't want each match on a separate line. Fortunately, it's easy to convert it to a comma-separated list instead - simply pipe the output through xargs echo | sed 's/ /, /g'. So, for example, instead of:

Sophie@Sophie-Laptop:~/primtionary$ grep cuddle american-english-insane
cuddle
cuddleable
cuddled
cuddler
cuddlers
cuddles
cuddlesome
scuddle
scuddled
scuddles
upscuddle


...you get:

Sophie@Sophie-Laptop:~/primtionary$ grep cuddle american-english-insane | xargs echo | sed 's/ /, /g'
cuddle, cuddleable, cuddled, cuddler, cuddlers, cuddles, cuddlesome, scuddle, scuddled, scuddles, upscuddle


Very useful sometimes :D
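The same pipeline works on any newline-separated list, not just grep output; a tiny self-contained demonstration:

```shell
# xargs echo joins the lines with spaces; sed turns the spaces into ", ":
printf 'cuddle\ncuddled\ncuddles\n' | xargs echo | sed 's/ /, /g'
# -> cuddle, cuddled, cuddles
```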
[personal profile] sophie
Just a quickie - today I learned that instead of piping something through sort and then piping the output of that through uniq, you can just use sort -u, which will do both operations at once.

Unfortunately this doesn't help when you're doing something like sort | uniq -c | sort -n to sort by the number of times a line appears, but it's still a nice tip. :)
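For example:

```shell
# Duplicate lines collapse and the result comes out sorted:
printf 'pear\napple\npear\n' | sort -u
# -> apple
#    pear
```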
foxfirefey: Fox stealing an egg. (Default)
[personal profile] foxfirefey
UNIX tips: Learn 10 good UNIX usage habits, as follows:

  • Make directory trees in a single swipe.
  • Change the path; do not move the archive.
  • Combine your commands with control operators.
  • Quote variables with caution.
  • Use escape sequences to manage long input.
  • Group your commands together in a list.
  • Use xargs outside of find.
  • Know when grep should do the counting -- and when it should step aside.
  • Match certain fields in output, not just lines.
  • Stop piping cats.


I learned a couple of things from this one!
[personal profile] tara_hanoi
Cross-posted from my main blog

I'm a fan of twitter, and shortened links come with the territory. The twitter web interface generally does a half-decent job of expanding these links, but I don't always get to see the expanded link.

Sometimes I want to see where the link leads without actually visiting the site, in case it's hosting malicious scripts. After a little bit of digging I found that wget can do exactly what I want. I have a little throwaway directory (in my case, '/export/home/sketchy'), which allows me to see where the link points to.

For example, let's say I see a link (it leads to a blog post of mine, so nothing very interesting) and want to know where it leads:

tara_hanoi@tara_babel:/export/home/sketchy$ wget --max-redirect=0 http://t.co/8xED8dz
--2011-07-25 16:01:10-- http://t.co/8xED8dz
Resolving t.co... 199.59.148.12
Connecting to t.co|199.59.148.12|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://bit.ly/guxtdh [following]

0 redirections exceeded.


The Location: line is the bit to pay attention to. In this case, the t.co link points to another shortener, bit.ly - if I want to follow that on, I don't have to paste that back into wget, I can just increase the 'max-redirect' parameter:

tara_hanoi@tara_babel:/export/home/sketchy$ wget --max-redirect=1 http://t.co/8xED8dz
--2011-07-25 16:03:50-- http://t.co/8xED8dz
Resolving t.co... 199.59.148.12
Connecting to t.co|199.59.148.12|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://bit.ly/guxtdh [following]
--2011-07-25 16:03:51-- http://bit.ly/guxtdh
Resolving bit.ly... 168.143.172.53, 2001:418:e801::12:1, 2001:418:e801::15:1, ...
Connecting to bit.ly|168.143.172.53|:80... connected.
HTTP request sent, awaiting response... 301 Moved
Location: http://tara-hanoi.dreamwidth.org/1508.html [following]

1 redirections exceeded.


Ok, so it leads to my own DW entry, so I'm pretty sure it's ok.

This might be useful for folks as a quick and dirty way to expand shortened URLs.
chebe: (Default)
[personal profile] chebe
Hey all, I don't think I've seen this here before, and I know there are many ways of converting DOS line endings to Unix, so I thought, in a fit of geekery, it might be fun to try and collect them all! After all, everyone has their own favourite programs.

I posted this at my own journal when I first came across it, but here's a new one as well;

sed -i 's/\r//' $file_name

perl -pi -e 's/\r\n/\n/g' $file_name
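For the collection: tr can do the same job as a stream filter (it reads stdin and writes stdout rather than editing in place):

```shell
# Delete every carriage return in the stream, so CRLF becomes LF:
printf 'one\r\ntwo\r\n' | tr -d '\r'
```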
pixel: Dreamwidth: Home is where the heart is (dw: homeis<3)
[personal profile] pixel
Using UbuntuOne to sync dotfiles across several, possibly remote machines. I don't know if this would work with, say, Dropbox (I've never used it), but I guess it could be adapted to any of those cloud services if you can access them from the command line and, most importantly, make a symbolic link to that directory.

The genesis for this is the fact that I like to use a) taskwarrior and b) my netbook. So I happen to have multiple computers that run Ubuntu, and maybe I'm in the minority in that. I wanted my tasks to be available wherever I was. I still haven't figured out how to pull them onto my Android phone, but I'll keep working on that.

Taskwarrior uses some text files stored in ~/.task/ by default. I initially attempted to just sync the directory, but that won't work. At some point I think I asked on Launchpad and never got an answer.

In the end I got it working like so...
brownbetty: Silouhette of figure, vines, planet in background (Explorer)
[personal profile] brownbetty
I want a command line text editor that can do a soft-word wrap. By "soft," I mean that long lines won't scroll past the right edge of my screen, but when I save my file, I won't discover it's had a mess of line breaks inserted into my text every seventy-odd chars.

I'd be mostly writing in natural English, so emacs or vi are rather more complicated than I'm looking for, but if you tell me one of them is my only solution, I will cry and then suck it up and learn to use whichever.
[personal profile] sophie
Most people in here will probably know how to pipe output from one command to another:

command1 | command2

However, what if command2 doesn't allow reading from standard input, and only supports filenames? How can you do this without writing to a file?

It turns out that you can do this in bash and ksh by using the <(command) syntax. For example, the above command can be written:

command2 <(command1)

This will execute command1 in a subshell, and at the same time, call command2 with a path looking something like /dev/fd/63, which refers to a file descriptor. When command2 reads from that, it'll get the output of command1.

At first this doesn't seem too useful, but this means that you can do nifty things like this:

diff -u <(sort filea.txt) <(sort fileb.txt)

Which will sort filea.txt and fileb.txt, and then diff the outputs - all without writing a single file.
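A fully self-contained illustration of the same idea (bash/ksh/zsh syntax), with the file contents faked up via printf:

```shell
# Same contents in a different order; sort both sides in subshells
# and diff them, with no temporary files anywhere:
diff <(printf 'b\na\n' | sort) <(printf 'a\nb\n' | sort) && echo identical
# -> identical
```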

Note that if the subshells require user input, this isn't going to work, so you can't use this to capture user input and pass it to a script which would otherwise require a filename. However, as long as this isn't the case, everything should work smoothly.

[edit: Oh, and I should mention that, unlike piping, you can execute several commands in the subshell. For example:

rev <(echo wheeness; sleep 2; echo blarg)

This also demonstrates how both the subshell and the main process run simultaneously; the output is "sseneehw", followed by a delay of 2 seconds, followed by "gralb".]

[edit 2: See this comment for an example of how to run multiple commands via piping!]
[personal profile] sophie
If you've ever tried using xargs with a list of filenames, you've probably at some point come across errors like these:

xargs: unmatched single quote; by default quotes are special to xargs unless you use the -0 option

ls: cannot access /home/sophie/directory: No such file or directory
ls: cannot access with: No such file or directory
ls: cannot access spaces: No such file or directory
ls: cannot access in/blah.txt: No such file or directory


Both of these errors are due to xargs by default interpreting some characters as special; in the first one, it won't allow apostrophes, and in the second it treats spaces as if they meant separate arguments. And the -0 switch doesn't help at all if you're using newlines.

Both of these errors can be resolved very easily: just add the -d"\n" switch before the command you want xargs to execute. This tells xargs that you don't want xargs to mess with your input at all except to treat newline characters as delimiters. This time, you should find that both apostrophes and spaces are accepted properly.
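A quick demonstration with two made-up filenames containing spaces (-d is a GNU xargs option):

```shell
# With newline as the only delimiter, each whole line becomes one
# argument, spaces and all:
printf 'file one.txt\nfile two.txt\n' | xargs -d"\n" printf '[%s]'
# -> [file one.txt][file two.txt]
```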
karmag: Stylized face based on Dreamwidth logo (Default)
[personal profile] karmag

Hey, a command line community? That's a pretty neat thing to have.

So to introduce myself, here's something I figured out pretty recently. It's in one of those eyes-glazing-over parts of the shell man page (but I had a specific problem to solve so I had to figure it out). Behold! It is a piece of "parameter expansion" magic:

$ TEST='foo:bar' ; echo ${TEST%:*} ${TEST#*:}
foo bar

Ooh, exciting! No wait, it's not. But here's the problem I had: How can I loop over pairs of values in a script or one-liner?

The solution I came up with looked something like this:

$ for f in url1:filename1 url2:filename2 url3:filename3;
do URL=${f%:*}; FILENAME=${f#*:} ;
...
; done

...and so for each pair, URL and FILENAME gets set to their respective portion. (And you'll be glad to learn that I got my Youtube-mediated crime drama fix in the end.)
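A complete, runnable version of that pattern, with echo standing in for whatever each iteration actually does (the pairs are placeholders):

```shell
for f in url1:filename1 url2:filename2 url3:filename3; do
    URL=${f%:*}        # drop the shortest ':*' suffix -> left half
    FILENAME=${f#*:}   # drop the shortest '*:' prefix -> right half
    echo "$URL -> $FILENAME"
done
# -> url1 -> filename1
#    url2 -> filename2
#    url3 -> filename3
```

Note that %:* strips from the last colon and #*: strips to the first, so values containing their own colons (like http:// URLs on the left) need care.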

Page generated Dec. 7th, 2016 08:32 am
Powered by Dreamwidth Studios