Using ngrok to access multiple Homestead sites remotely

ngrok can be used to provide remote access to a local Homestead or Vagrant site, e.g. to show it to a client.

You need to use header rewriting to work with the Homestead configuration. The syntax for a single site looks like:

ngrok http -host-header=rewrite mysite.app:80

If you have multiple sites then you will need to use ngrok with a config file, stored in ~/.ngrok2/ngrok.yml. The docs are vague on how to do the rewriting in the config file, so here it is for reference:

tunnels:
    mysite:
        addr: mysite.app:80
        proto: http
        host_header: rewrite
    myapi:
        addr: myapi.app:80
        proto: http
        host_header: rewrite
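
You can then bring up the named tunnels (the names match the keys in the config file):

ngrok start mysite myapi

or start everything defined in the file with ngrok start --all.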

Note that host header rewriting doesn’t work nicely with cookies, which seriously limits this approach.

See http://stackoverflow.com/questions/41523847/fail-to-create-cookies-while-using-ngrok-with-header-rewrite

Stop Postfix from sending email for testing

This applies to CentOS 6. You may want to do this when testing some code and you’re not sure whether it is going to send emails.

Edit /etc/postfix/main.cf

Add the following lines at the bottom:

# use localhost for the hostname and domain
myhostname = localhost
mydomain = localdomain
# listen on the loopback interface only
inet_interfaces = $myhostname, localhost
# only accept mail addressed to this machine
mydestination = $myhostname, localhost.$mydomain, localhost
mynetworks_style = host
# bounce any mail addressed to the outside world
default_transport = error:outside mail is not deliverable

Save the file, then run the following commands (as root):

postfix upgrade-configuration
postfix check
newaliases
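
If Postfix is already running, restart it so the new settings take effect:

service postfix restart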

Test by mailing something to yourself:

mail username@example.com
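
Depending on your mailx settings this will prompt for a subject and then read the message body until you press Ctrl-D. Alternatively you can send a test message non-interactively:

echo "Test message" | mail -s "Test subject" username@example.com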

Check that it just gets put in the root mailbox:

mail

Enter the number of the email you want to read and it appears; press d to delete it, z to return to the list, and q to quit.
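
You can also watch the error transport rejecting outside mail in the mail log:

tail /var/log/maillog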

Running Laravel on a shared hosting subdomain

Running Laravel 5 on a shared host subdomain (I use Vidahost) is a little daunting because Laravel requires the web root to point to the /public folder, whereas with a subdomain the web root is generally the root folder that is created for you.

My solution was to create a directory in the subdomain root folder and copy all the code into it. I then copied the contents of the /public folder into the subdomain root folder and edited index.php.

The two require lines need modifying to replace the ‘..’ with the actual path to the directory containing the code:

//require __DIR__.'/../bootstrap/autoload.php';
require __DIR__.'/mysubdirectory/bootstrap/autoload.php';

//$app = require_once __DIR__.'/../bootstrap/app.php';
$app = require_once __DIR__.'/mysubdirectory/bootstrap/app.php';

It’s not pretty but it worked OK.

The little application I wrote helps with non-verbal reasoning tests by letting you memorise the numeric equivalents of the letters of the alphabet: Alphabet to Numbers.

(Image: folder structure for running Laravel from the site root folder)
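
In outline the layout ends up like this (public_html and mysubdirectory are illustrative names):

public_html/                  <- subdomain root: contents of /public copied here
    index.php                 <- edited as above
    .htaccess
    css/, js/, ...
    mysubdirectory/           <- the full Laravel codebase
        app/
        bootstrap/
        config/
        vendor/
        ...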

Vagrant notes

Stop the box checking for updates

If you want to stop your VM from checking for updates to the Vagrant box add the following immediately after the Vagrant.configure line:

# don't check for VM updates
config.vm.box_check_update = false
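
For context, a minimal Vagrantfile sketch showing where the line goes (the box name is just an example):

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  # don't check for VM updates
  config.vm.box_check_update = false
end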

Update Guest Additions

There is a plugin, vagrant-vbguest, which will check whether the VirtualBox Guest Additions in your VM are out of date and automatically update them if necessary. You can install it with:

vagrant plugin install vagrant-vbguest

Once the Guest Additions have been installed you may want to use the following to prevent further updates (add just after the Vagrant.configure line):

# don't update guest additions
config.vbguest.auto_update = false
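
You can check whether the Guest Additions match your VirtualBox version at any time with the plugin's status command:

vagrant vbguest --status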

Better synced folder permissions

Instead of the default synced folder settings, which may cause problems when your server tries to change files (e.g. WordPress updating itself), I use the following with Ubuntu:

config.vm.synced_folder "./", "/vagrant", id: "vagrant-root",
    owner: "vagrant",
    group: "www-data",
    mount_options: ["dmode=775", "fmode=664"]

If you are using CentOS then the group should be apache instead of www-data.
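
So on CentOS the block becomes:

config.vm.synced_folder "./", "/vagrant", id: "vagrant-root",
    owner: "vagrant",
    group: "apache",
    mount_options: ["dmode=775", "fmode=664"]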

Magento performance

You can boost Magento performance (or that of any complicated PHP app) running in a VM by changing the PHP OPcache revalidate frequency. It defaults to 2 seconds, which means that as you navigate a site all the PHP files are re-checked for changes with every click. With tens of thousands of PHP files that’s a hefty penalty.

Changing this to something like 20 seconds means you’ll be using cached code. Do this with:

sudo nano /etc/php5/apache2/php.ini

and set:

opcache.revalidate_freq = 20
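
then restart Apache for the change to take effect:

sudo service apache2 restart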

Easy Linux MySQL default configuration on cloud server

When you initially install MySQL on a cloud server (Ubuntu 14.04), the /etc/mysql/my.cnf file is configured to work with only 32MB of RAM. This is pretty crazy when you consider that most cloud servers have at least 1GB of RAM; the default configuration could be holding your website back, so it is something you should consider changing.

If you want a quick way of boosting MySQL performance without having to tune the configuration yourself, a number of pre-built configurations are stored in /usr/share/doc/mysql-server-5.5/examples/
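
On a stock install the directory typically contains my-small.cnf, my-medium.cnf, my-large.cnf, my-huge.cnf and my-innodb-heavy-4G.cnf:

ls /usr/share/doc/mysql-server-5.5/examples/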

my-huge.cnf is for a system with 1GB-2GB of memory.

It would be nice if you could just copy this over my.cnf and restart MySQL, but that doesn’t work. You’ll get the message: start: Job failed to start

What needs changing to make this work on a recently installed system?

Edit the [mysqld] section and add user = mysql; that’s all you need to do if you are using MyISAM.
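
In other words, the top of the section should read:

[mysqld]
user = mysql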

If you want to use InnoDB then remove the commented-out section on Replication Slave (it’s unnecessary and just complicates understanding the file).

Uncomment the InnoDB section but leave the following line commented out:

#innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend

because by default MySQL uses innodb_data_file_path = ibdata1:10M:autoextend, which is already auto-extending.

Finally, before starting MySQL you need to delete the old InnoDB log files with rm /var/lib/mysql/ib_logfile* because the log file settings have changed.

That’s all you need to do. You’ll now be able to enjoy the extra performance of having a MyISAM key buffer of 384MB and an InnoDB buffer pool of 384MB.
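
Putting the whole procedure together as a shell sketch (run as root; paths are for Ubuntu 14.04 / MySQL 5.5 as above):

cp /etc/mysql/my.cnf /etc/mysql/my.cnf.bak
cp /usr/share/doc/mysql-server-5.5/examples/my-huge.cnf /etc/mysql/my.cnf
# edit /etc/mysql/my.cnf as described above (user = mysql, InnoDB section, etc.)
service mysql stop    # if it is running
rm /var/lib/mysql/ib_logfile*
service mysql start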