Taking a PNG Screenshot of a Webpage from PHP + Apache on Ubuntu Server

While it may not be terribly useful, I decided that I had to know how to generate a screenshot of a webpage from my own webpage. Please note that THIS OPENS EXTRA SECURITY RISKS and SHOULD ONLY BE DONE ON AN ISOLATED SERVER where NOTHING OF VALUE CAN BE LOST.

** If this is running on Ubuntu Server, you obviously need to install a desktop environment so gnome-web-photo has an X server to render with:
sudo apt-get install ubuntu-desktop
sudo shutdown -r now
export DISPLAY=:0.0

Install gnome-web-photo
sudo apt-get install gnome-web-photo

Login as www-data
sudo su www-data

** If you receive this error:
This account is currently not available.
That just means www-data's shell is set to a nologin shell. Rather than recreating the user (which can break file ownership under /var/www), override the shell when switching:
sudo su -s /bin/bash www-data

Try to run gnome-web-photo
gnome-web-photo http://google.com ~/test.png
No protocol specified

** (gnome-web-photo:13000): WARNING **: Could not open X display

No protocol specified
Cannot open display: 
Usage: gnome-web-photo [--mode=photo|thumbnail|print] [...]

It looks like our little friend Apache cannot connect to our X server. That's an easy fix: we just need to add www-data to the list of users who are allowed to access the active display.

With an account that can access the display:
xhost +SI:localuser:www-data

With www-data account:
export DISPLAY=":0.0"
gnome-web-photo http://google.com ~/test.png

It should work now! You can now use exec() and shell_exec() to take browser shots from a PHP page. Go ahead, run your own browsershots website.
exec("export DISPLAY=:0.0; gnome-web-photo http://google.com $dir/shot.png");
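One caveat: passing raw user input into exec() invites shell injection on top of the risks already mentioned. Here is a minimal sketch of a safer wrapper; the function name and the http(s)-only whitelist are my own, only the gnome-web-photo invocation comes from the steps above:

```shell
# Hypothetical wrapper: refuse anything that is not a plain http(s) URL
# before it ever reaches the shell, and keep both arguments quoted.
webshot() {
    url=$1
    out=$2
    case $url in
        http://*|https://*) ;;                              # allow only http(s)
        *) echo "webshot: refusing non-http url" >&2; return 1 ;;
    esac
    DISPLAY=:0.0 gnome-web-photo "$url" "$out"              # arguments stay quoted
}
```

Saved as a standalone script (with "$1"/"$2" in place of the function arguments), PHP can call it via exec() together with escapeshellarg() on each argument, which is the other half of the defense.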


Take a Browser Screen Shot of a Webpage and Convert it to a PNG Image in Ubuntu 13.10

Before I start ranting about all of the impossible ways that people claim should work, here is the actual solution:

sudo apt-get install gnome-web-photo
gnome-web-photo http://google.com gnome-web-photo.png

Why was that so difficult, internet? Why so many options that do not work? I blame the search algorithms that place value on the oldest links. The oldest method is seldom the best where technology is concerned.

If the above doesn't work, here is a backup method:

sudo apt-get install wkhtmltopdf imagemagick
wkhtmltopdf www.google.com google.pdf
convert -density 150 google.pdf google.png


Now let's talk about the stuff that doesn't work.

wkhtmltoimage
Does this work on your OS? I just get seg faults.

khtml2png
This hasn't been hip in 4+ years and clearly doesn't run on modern Ubuntu installations.

webkit2png and PyWebShot
Both of these supposedly work great... if you're on a Mac. But almost nobody runs a Mac server, so programmatically taking browser shots there is of limited use.
Both tools depend on PyObjC for their WebKit bindings, and PyObjC only builds on OS X. On Linux, pip dies while trying to detect the OS X version:
ValueError: invalid literal for int() with base 10: ''
It's looking for your OS X version string and finding nothing, because you aren't on a Mac. Dead end.

Full Text Searching InnoDB Tables on MySQL 5.5 with Trigger-based Table Syncing

As you may be aware, MySQL 5.6 supports full text searches on InnoDB tables, but few are willing to upgrade yet because 5.6 is not yet considered stable enough for production servers. This means that the only way to use full text searching on MySQL 5.5 is to put the searchable data into a MyISAM table. Unfortunately, MyISAM tables suffer from a number of faults, the most worrisome being data integrity.

The solution for me is to use InnoDB tables for everything, but duplicate the searchable data into a MyISAM table. I then use MySQL triggers to keep the MyISAM table's data automatically synced up with the InnoDB. If the MyISAM table gets corrupted, I can wipe it and copy the data back over with extreme ease.

To demonstrate, assume that I have a table called `links` with all sorts of columns and rows in it. Of these columns, I want the title and description to be searchable.

The first step is to create a separate MyISAM table to store the searchable data. Make sure the column definitions are the same, but only include the primary key and the columns you want to search.

CREATE TABLE `links_searchable` (
    `id` int(10) unsigned NOT NULL,
    `title` varchar(100) NOT NULL,
    `description` varchar(1000) NOT NULL
) ENGINE=MyISAM;

The next step is to create a fulltext index on the searchable columns, so that MySQL will allow the MATCH() queries to be executed on these columns.

create fulltext index `ft` on links_searchable (title,description);

If your `links` table has any data in it, go ahead and copy it over now

insert links_searchable (id,description,title) select id,description,title from links;

Now create the MySQL triggers so that you don't have to write extra code to keep these tables synced. Once these triggers are created, you can forget about this extra table until it's actually time to search.

CREATE TRIGGER ins_searchable AFTER INSERT ON `links`
    FOR EACH ROW
        INSERT INTO links_searchable (id,description,title) VALUES (NEW.id,NEW.description,NEW.title);

CREATE TRIGGER del_searchable AFTER DELETE ON `links`
    FOR EACH ROW
        DELETE FROM links_searchable WHERE id=OLD.id;

CREATE TRIGGER upd_searchable AFTER UPDATE ON `links`
    FOR EACH ROW
        UPDATE links_searchable SET title=NEW.title, description=NEW.description WHERE id=NEW.id;

At this point, you should be able to run full text searches on your data. Here is a MATCH() query that returns all of the relevant links data. The results are automatically sorted by relevance.

select links.* from links, links_searchable where MATCH (links_searchable.title,links_searchable.description) AGAINST ('query') and links.id=links_searchable.id
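If you also want the relevance score itself (to display, or to combine with other ordering), select the MATCH() expression too; this variant is my own, but it uses the same index and columns as the query above:

```sql
SELECT links.*,
       MATCH (links_searchable.title, links_searchable.description)
           AGAINST ('query') AS relevance
FROM links
JOIN links_searchable ON links.id = links_searchable.id
WHERE MATCH (links_searchable.title, links_searchable.description)
          AGAINST ('query');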

Now, if you are like me, you hate stopwords. Disabling them is pretty easy in Ubuntu. Simply add or update these lines in your /etc/mysql/my.cnf under [mysqld]

ft_stopword_file = ""
ft_min_word_len = 1

Then restart MySQL

sudo service mysql restart

And finally, reindex your searchable table

repair table links_searchable;
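And the corruption-recovery story mentioned earlier really is just two statements: wipe the MyISAM copy and refill it from InnoDB, using the same columns as the initial copy above.

```sql
TRUNCATE TABLE links_searchable;
INSERT INTO links_searchable (id,description,title)
    SELECT id,description,title FROM links;
```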

It's that easy. Enjoy all of the benefits of both MyISAM and InnoDB by using both at the same time :)

Install Android SDK on Ubuntu 13.10

Download and unzip to ~/adt
cd ~
wget http://dl.google.com/android/adt/adt-bundle-linux-x86-20140702.zip
unzip adt-bundle-linux-x86-20140702.zip
rm adt-bundle-linux-x86-20140702.zip
mv adt-bundle-linux-x86-20140702 adt

Install dependencies
sudo apt-get install openjdk-7-jre openjdk-7-jdk

Include the /sdk/tools and /sdk/platform-tools folders in your environment path
echo 'export PATH=${PATH}:~/adt/sdk/tools' >> ~/.bashrc
echo 'export PATH=${PATH}:~/adt/sdk/platform-tools' >> ~/.bashrc
source ~/.bashrc

run the /sdk/tools/android binary and install updates
android

run the AVD manager, create a virtual device, and start it to make sure everything is working
android avd

Gmail 'Send Mail As' SMTP Setup (Ubuntu & Postfix)

Gmail used to let us send mail from non-gmail addresses using the gmail servers. For some reason, they decided to disable this feature and force us to set up our own SMTP servers for outbound mail.

This guide will cover how to set up Postfix with SASL authentication on Ubuntu and avoid the dreaded "We are having trouble authenticating with your other mail service. Please try a different port or connection option" error.

First let's install postfix and SASL and make sure postfix is running
sudo apt-get install postfix libsasl2-2 sasl2-bin libsasl2-modules
sudo service sendmail stop    # only needed if sendmail was previously installed
sudo service postfix restart

You may want to set up postfix to receive / forward mail or whatever else you want to do with it. I usually set up a virtual configuration that simply forwards mail to my gmail account.

At this point, you should have postfix running and listening to port 25, but if you try to configure gmail to send through it, you will get some stupid error like this "We are having trouble authenticating with your other mail service. Please try a different port or connection option". This is because gmail wants some sort of secure authentication, and you have not set that up yet.

So the next step is to create an SSL certificate for SASL, which is a bit of a pain, but you can simply paste this in line by line and answer the questions however you like. Make sure you write down the certificate password and use the same one every time it asks. Run these as root (sudo -s first), since they write under /etc/postfix.

mkdir /etc/postfix/ssl
cd /etc/postfix/ssl/
openssl genrsa -des3 -rand /etc/hosts -out smtpd.key 1024
chmod 600 smtpd.key
openssl req -new -key smtpd.key -out smtpd.csr
openssl x509 -req -days 3650 -in smtpd.csr -signkey smtpd.key -out smtpd.crt
openssl rsa -in smtpd.key -out smtpd.key.unencrypted
mv -f smtpd.key.unencrypted smtpd.key
openssl req -new -x509 -extensions v3_ca -keyout cakey.pem -out cacert.pem -days 3650
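If you would rather not babysit the prompts, the same self-signed certificate can be generated in one non-interactive shot. This is a sketch; the -subj value is a placeholder for your own mail hostname:

```shell
# One-shot key + self-signed cert (-nodes skips the passphrase entirely,
# which also means you can skip the rsa-unencrypt step above)
openssl req -new -x509 -nodes -days 3650 \
    -newkey rsa:2048 -keyout smtpd.key -out smtpd.crt \
    -subj "/CN=mail.example.com"
chmod 600 smtpd.key
# Sanity-check what was generated (exact formatting varies by openssl version)
openssl x509 -in smtpd.crt -noout -subject
```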


Cool beans. Now you just have to configure postfix to use SASL. You can paste this in all at once; it should not ask you any questions.

sudo postconf -e 'smtpd_sasl_local_domain ='
sudo postconf -e 'smtpd_sasl_auth_enable = yes'
sudo postconf -e 'smtpd_sasl_security_options = noanonymous'
sudo postconf -e 'broken_sasl_auth_clients = yes'
sudo postconf -e 'smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination'
sudo postconf -e 'inet_interfaces = all'
sudo postconf -e 'smtp_tls_security_level = may'
sudo postconf -e 'smtpd_tls_security_level = may'
sudo postconf -e 'smtpd_tls_auth_only = no'
sudo postconf -e 'smtp_use_tls = yes'
sudo postconf -e 'smtpd_use_tls = yes'
sudo postconf -e 'smtp_tls_note_starttls_offer = yes'
sudo postconf -e 'smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key'
sudo postconf -e 'smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt'
sudo postconf -e 'smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem'
sudo postconf -e 'smtpd_tls_loglevel = 1'
sudo postconf -e 'smtpd_tls_received_header = yes'
sudo postconf -e 'smtpd_tls_session_cache_timeout = 3600s'
sudo postconf -e 'tls_random_source = dev:/dev/urandom'
sudo postconf -e 'home_mailbox = Maildir/'
sudo postconf -e 'mailbox_command ='
echo 'pwcheck_method: saslauthd' | sudo tee -a /etc/postfix/sasl/smtpd.conf
echo 'mech_list: plain login' | sudo tee -a /etc/postfix/sasl/smtpd.conf
sudo service postfix restart
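Before going back to the Gmail settings, you can check SASL authentication by hand. AUTH PLAIN wants base64 of a NUL byte, the username, another NUL byte, and the password; the user/pass here are placeholders for an account you created with adduser:

```shell
# Build the AUTH PLAIN token: base64("\0user\0pass")
printf '\0user\0pass' | base64
# -> AHVzZXIAcGFzcw==
```

Then connect with `openssl s_client -connect yourhost:25 -starttls smtp`, send `EHLO test` followed by `AUTH PLAIN <token>`, and look for a `235 Authentication successful` reply. If you get that, Gmail will too.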

Now we need to configure SASL so it starts up automatically and change some things so that postfix can communicate with it.

sudo sed -i 's/START\=no/START\=yes/g' /etc/default/saslauthd
sudo sed -i 's/\/var\/run\/saslauthd/\/var\/spool\/postfix\/var\/run\/saslauthd/g' /etc/default/saslauthd
echo 'PWDIR="/var/spool/postfix/var/run/saslauthd"' | sudo tee -a /etc/default/saslauthd
echo 'PARAMS="-m ${PWDIR}"' | sudo tee -a /etc/default/saslauthd
echo 'PIDFILE="${PWDIR}/saslauthd.pid"' | sudo tee -a /etc/default/saslauthd
sudo dpkg-statoverride --force --update --add root sasl 755 /var/spool/postfix/var/run/saslauthd
sudo service saslauthd restart

Wonderful! Now you can use the 'adduser' command on your server to set up users that correspond to whatever email accounts you want to send mail as. Then you simply go into the gmail settings and use the users and passwords that you set up with adduser.

The SMTP hostname is whatever you have set up as the MX record in the DNS settings for your hostname. I usually just use the @ record, but some people really like subdomains, so more power to them. The port should be 25, and TLS should be set up and working out of the box. It is highly recommended that you use that.

That should cover the basics. It is important to note that you do not need to buy an SSL certificate like you would need to do with a webserver. Generating your own works just fine. Setting up a mail server should cost you nothing but 30 minutes of work. It should now surprise you that companies like GoDaddy charge so much money for something so simple.

I will cover receiving mail in another post, but that is straightforward as well. You can either set up postfix forwarding or use dovecot for POP or IMAP. Another alternative is to set up an open source mail interface on your server like RoundCube. I just find gmail's convenience worth the privacy invasion most of the time, especially when you have 30 email addresses to listen on.

BFL ASIC cgminer Install on Ubuntu 13.10

Just run this little script:

sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get -y install git make libtool autoconf libcurl4-openssl-dev libudev-dev
git clone https://github.com/ckolivas/cgminer.git
cd cgminer
./autogen.sh
./configure --enable-bflsc
make
sudo make install
sudo cgminer



Free IP Locator API - Convert IP Addresses to Zip Codes with PHP

If you just want a free site to use:
IP Address Locator with Map


If you want a quick PHP solution, try this:

$ip = 'xxx.xxx.xxx.xxx';
var_dump(json_decode(file_get_contents("https://freegeoip.net/json/$ip"),true));
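The same endpoint works fine from the shell too. A sketch, using a canned response so it runs offline; the JSON shape matches freegeoip's /json output, and the IP and values are examples:

```shell
# In real use:  resp=$(curl -s "https://freegeoip.net/json/8.8.8.8")
resp='{"ip":"8.8.8.8","country_code":"US","zip_code":"94035"}'

# Pull out the zip_code field with sed (no jq required)
zip=$(printf '%s' "$resp" | sed -n 's/.*"zip_code":"\([^"]*\)".*/\1/p')
echo "$zip"
# -> 94035
```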



If you want quick responses, you can install the freegeoip server on ubuntu. This will give you your own personal IP locator service to use privately (or publicly if you are a nice guy):

Install golang 1.1.1
--------------------

sudo apt-get install python-software-properties
sudo add-apt-repository ppa:duh/golang
sudo apt-get update
sudo apt-get install golang
go version

Set your go environment variables
---------------------------------

export GOROOT=/usr/lib/go

Install freegeoip
-----------------

git clone https://github.com/fiorix/freegeoip.git
cd freegeoip
go build

Download the database
---------------------

cd db
./updatedb
file ipdb.sqlite

Run it
------

cd ..
./freegeoip

cron script to keep the server running (replace $FREEGEOIP_PATH)
----------------------------------------------------------------

#!/bin/bash
ps cax | grep -v grep | grep freegeoip > /dev/null
result=$?
echo "${result}"
if [ "${result}" -eq "0" ] ; then
        echo "freegeoip is already running" #>>/home/work/cgcron
else
        cd $FREEGEOIP_PATH
        ./freegeoip &
fi

cron script to update the database regularly (replace $FREEGEOIP_PATH)
----------------------------------------------------------------------

#!/bin/bash
cd $FREEGEOIP_PATH/db
./updatedb &
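The two scripts above need crontab entries to drive them, which the post never shows. Here is one possibility in the same crontab format used later in this post; the script paths and the weekly refresh schedule are my own assumptions:

```
# m h  dom mon dow   command
  * *  *   *   *     /usr/sbin/freegeoip_cron
  0 4  *   *   0     /usr/sbin/freegeoip_updatedb
```

Adjust the paths to wherever you actually saved the scripts.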



Install CPU Miner on Ubuntu and Mine FTC

# install dependencies
sudo apt-get install autoconf automake libcurl4-nss-dev gcc git

# download cpuminer
git clone https://github.com/pooler/cpuminer

# cd to directory
cd cpuminer

# generate make files
./autogen.sh
./configure CFLAGS="-O3"

# install
sudo make install

# run on wemineftc mining pool
sudo minerd -o stratum+tcp://stratum.wemineftc.com:4444 -u [username].[worker] -p [worker password]

#cron script to keep you mining whenever your computer is on
sudo vi /usr/sbin/mine_cron
------------------------------------------------------
#!/bin/bash
ps cax | grep -v grep | grep minerd > /dev/null
result=$?
echo "${result}"
if [ "${result}" -eq "0" ] ; then
        echo "minerd is already running" #>>/home/work/cgcron
else
        sudo minerd -o stratum+tcp://stratum.wemineftc.com:4444 -u [username].[worker] -p [worker password] &
fi
-------------------------------------------------------
sudo chmod a+x /usr/sbin/mine_cron

#run it however often you like
crontab -e
--------------------------------------------------------
# m h  dom mon dow   command
  * *  *   *   *     /usr/sbin/mine_cron
--------------------------------------------------------


Why Does CodeIgniter Set Cookie Not Work?

Well, because CodeIgniter is garbage, and you should probably be using Yii.

If you want to fix CodeIgniter, open up system/core/Input.php and scroll down to line 248 to see this line:

$expire = time() - 86500;

Well that's silly ... someone decided to default the cookie's expiration date to yesterday! (In fairness, that fallback exists so that calling set_cookie() without an expiry deletes the cookie; passing an explicit expire value avoids editing core at all.)

If you change the '-' to a '+', your cookies will be good for 24 hours. If you want your cookie to be good for 40+ years, try this:

$expire = time() * 2;

This bug may have been fixed in later versions, but it certainly was not in mine.