Author Archives: philglau

Compiling MEX files using Mavericks OS X 10.9 and Matlab R2012a with Xcode 6.x

To compile MEX C and C++ files for Matlab R2012a on an OS X 10.9 Mavericks install with Xcode 6.1, you will need to change the mexopts.sh settings found within the bin folder of the Matlab application.

cd /Applications/MATLAB_R2012a_Student.app/bin
ls
# make a copy of the original mexopts.sh file
cp mexopts.sh orig_mexopts.sh

Open the mexopts.sh file and search for SDK. On about line 167 you’ll find:

CC='gcc-4.2'
SDKROOT='/Developer/SDKs/MacOSX10.6.sdk'
MACOSX_DEPLOYMENT_TARGET='10.5'
ARCHS='x86_64'
CFLAGS="-fno-common -no-cpp-precomp -arch $ARCHS -isysroot $SDKROOT -mmacosx-version-min=$MACOSX_DEPLOYMENT_TARGET"
CFLAGS="$CFLAGS -fexceptions"
CLIBS="$MLIBS"
COPTIMFLAGS='-O2 -DNDEBUG'
CDEBUGFLAGS='-g'
#
CLIBS="$CLIBS -lstdc++"
# C++keyName: GNU C++
# C++keyManufacturer: GNU
# C++keyLanguage: C++
# C++keyVersion:
CXX=g++-4.2

I changed it to the following:

#CC='gcc-4.2'
#### Changed this line ####
CC='gcc'
#SDKROOT='/Developer/SDKs/MacOSX10.6.sdk'
#### Changed this line ####
SDKROOT='/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk'
MACOSX_DEPLOYMENT_TARGET='10.9'
ARCHS='x86_64'
CFLAGS="-fno-common -no-cpp-precomp -arch $ARCHS -isysroot $SDKROOT -mmacosx-version-min=$MACOSX_DEPLOYMENT_TARGET -Dchar16_t=uint16_t"
CFLAGS="$CFLAGS -fexceptions"
CLIBS="$MLIBS"
COPTIMFLAGS='-O2 -DNDEBUG'
CDEBUGFLAGS='-g'
#
CLIBS="$CLIBS -lstdc++"
# C++keyName: GNU C++
# C++keyManufacturer: GNU
# C++keyLanguage: C++
# C++keyVersion:
#CXX=g++-4.2
#### Changed this line ####
CXX=g++

Note that I specifically changed:

  • SDKROOT to the full path of the 10.9 SDK inside the Xcode bundle
  • CC changed from “gcc-4.2” to just “gcc”
  • CXX changed from “g++-4.2” to just “g++”
  • -Dchar16_t=uint16_t added to the end of CFLAGS to overcome the fact that char16_t isn’t a native type
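If you have more than one MATLAB install to fix, the same substitutions can be scripted rather than edited by hand. Here's a minimal Python sketch of just the text edits described above (`patch_mexopts` is a hypothetical helper, not part of any MATLAB tooling; run it on a copy of mexopts.sh, keeping the orig_mexopts.sh backup made earlier):

```python
SDK = ('/Applications/Xcode.app/Contents/Developer/Platforms/'
       'MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk')

def patch_mexopts(text):
    """Apply the mexopts.sh edits described above to the file's text."""
    text = text.replace("CC='gcc-4.2'", "CC='gcc'")
    text = text.replace("CXX=g++-4.2", "CXX=g++")
    text = text.replace("SDKROOT='/Developer/SDKs/MacOSX10.6.sdk'",
                        "SDKROOT='" + SDK + "'")
    text = text.replace("MACOSX_DEPLOYMENT_TARGET='10.5'",
                        "MACOSX_DEPLOYMENT_TARGET='10.9'")
    # tack -Dchar16_t=uint16_t onto the CFLAGS line
    text = text.replace('-mmacosx-version-min=$MACOSX_DEPLOYMENT_TARGET"',
                        '-mmacosx-version-min=$MACOSX_DEPLOYMENT_TARGET '
                        '-Dchar16_t=uint16_t"')
    return text
```

Read mexopts.sh, pass its contents through patch_mexopts, and write the result back out.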

From within the /Applications/MATLAB_R2012a_Student.app/bin folder I ran ‘mex -setup’ and selected the revised version of mexopts.sh.

Seems to work with R2012a. It may also work with R2012b, but I haven’t tested it.

Removing 2014 Quickbooks Payroll Liability Reminder

There is currently an unfixed bug in Quickbooks 2014 which Intuit doesn’t seem very motivated to fix. (It has been ongoing since October 2013 by some accounts.)

The bug involves Payroll Liability payments showing as ‘unpaid’/‘unprinted’ even after they have been submitted for E-Payment through the Quickbooks payroll service.

This post gives a visual walkthrough of how I applied some of the comments on the link above to resolve the issue for our purposes.

Step 1:

Find the e-payment in question in your register for which you are seeing the ‘unprinted’ checks reminder.

Select Problem E-Pay item

Step 2:

Void this E-Payment by either right-clicking and selecting “Void Liability Check” or from the Edit menu.

Void out the existing E-Payment from your register.

Step 3: Receive a warning from Quickbooks. I’m not 100% clear on when an E-Payment will or will not have been processed by Quickbooks, so I waited a couple days after the problem entry to resolve this error. That way I was guaranteed that the E-Payment was processed and sent to the government agency as expected. The last thing you want is for the E-payment to ‘actually’ be voided.

Semi-Useless Warning From Quickbooks

Step 4: The liability will reappear in your Pay Liabilities window.

The voided E-Payment reappears in the liabilities window

Step 5: Double click the payment you just voided to open the Payment window. In my example, it says ‘overdue’, but that’s because I did it 5 days after submission and Quickbooks thinks the payment was never made. Because the payment ~DID~ go through my bank account, I know that it isn’t really past due. The Payment History shows the “To Print” flag that we’re trying to get rid of due to the bug in Quickbooks software.

Once the payment window opens, you’ll see the fucked up default settings that are causing the problem. Notice how “E-Payment” is selected and “To Be Printed” is grayed out and ‘selected’.

Fucked up default settings courtesy of Intuit’s poor quality control

Step 6: Change the radio button from “E-Payment” to “Check”. This will allow the “To Be Printed” box to become active. Unselect the box. This will cause a check number to appear in the “No.” field at the top where it currently says “To Print” (top right corner). Remove the check number it fills in and type in “E-pay” to indicate that it was in fact already e-paid.

I also like to change the date back to the same day I actually made the E-payment so that the voided transaction as well as this replacement one show up in the same place in the register.

Once you’ve made the changes “Save and Close” the payment. If you open the register that you just voided the payment out of, you should see this new replacement payment.

Change settings to “Check” and deselect “To Be Printed”

Step 7: I like to add memos to the voided payment and to the new payment to indicate why there are two transactions and why it looks like an E-Payment was voided. This will help anybody else who has to review the notes at some future date.

Append memo to the replacement ‘e-pay’ entry.

Add memo to original e-payment that we voided out at the beginning.

Step 8: In the Pay Liabilities window we no longer have the annoying and confusing warning that we need to print a liability check that was already e-paid.

No more warning!!

Step 9: Visit Xero and seriously consider switching from Quickbooks. As soon as Xero figures out how to import historical Quickbooks data, I’m gone!!

Arduino Code for determining QPPS setting for RoboClaw Motor Controller

If you use the RoboClaw Motor Controller with wheel encoders, at some point you’ll need to determine the QPPS. QPPS stands for “quadrature pulses per second”; the motor controller uses it to establish the maximum speed the motor can be driven at, and it also figures into all the speed, distance, and position commands that are part of the Arduino Library.

While you can mathematically calculate it based on your particular motors and encoders, the ideal result will probably not match the actual results. Thus, we need to write some code that will run the motors at full speed and display the experimental results.

For example, I’m using Pololu 6V 75:1 ratio motors with 48 cpr encoders. This motor is specced at 130 RPM with a 75:1 reduction and the encoders count 48 clicks per revolution. Thus (130 rpm * 75 * 48)/(60 seconds per minute) = 7800 qpps in theory.
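That arithmetic, spelled out (numbers are the Pololu specs quoted above; Python used here just as a calculator):

```python
rpm = 130        # rated output speed at 6V, after the gearbox
gear_ratio = 75  # the encoder sits on the motor shaft, before the gearbox
cpr = 48         # encoder counts per motor-shaft revolution

qpps_theoretical = (rpm * gear_ratio * cpr) // 60  # per-minute -> per-second
print(qpps_theoretical)  # 7800
```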

I’ve found, however, that when I drive the motor controller at full speed, I don’t get a full 6V into each motor. I’m getting about 5.82V, which means my qpps is not going to match the theoretical value.
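As a back-of-the-envelope check on how far off the theory might be, you can derate the theoretical figure by the measured voltage. This assumes motor speed scales roughly linearly with supply voltage, which is only an approximation; the experimental measurement is still the number to trust:

```python
qpps_theoretical = 7800  # from the calculation above
measured_volts = 5.82    # what actually reaches each motor at full speed
rated_volts = 6.0        # the motor's rated voltage

# rough estimate, assuming speed scales linearly with voltage
qpps_derated = round(qpps_theoretical * measured_volts / rated_volts)
print(qpps_derated)  # 7566
```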

So, basically, we want to run the motors at ‘full speed’ using the ForwardM1 and ForwardM2 motor commands and then read back the Speed using the ReadSpeedM1 and ReadSpeedM2 commands. These will report back the speed in QPPS.

This code uses a single pole filter to essentially average out the results. Because of this, you need to let the motors run a bit to get to a converged value. At some point, the speed will stop going up and will then fluctuate around a value, going up and down slightly. I interpret this as my maximum experimental QPPS to use with the RoboClaw motor controller.
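The “single pole filter” here is just exponential smoothing: each new reading nudges the running average toward itself by a fraction alpha. A standalone Python illustration of why you have to let it run before reading off a value (the steady 7500 qpps reading is made up for the example):

```python
def smooth(readings, alpha=0.10):
    """Single pole (exponential) filter: same update as the Arduino sketch."""
    avg = 0.0
    history = []
    for r in readings:
        avg = avg * (1 - alpha) + r * alpha
        history.append(avg)
    return history

# feed the filter a made-up steady reading of 7500 qpps, starting from 0
history = smooth([7500] * 50)
print(round(history[0]))   # 750 -- first sample is nowhere near the true value
print(round(history[-1]))  # 7461 -- after 50 samples it has nearly converged
```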


#include "BMSerial.h"
#include "RoboClaw.h"

// Roboclaw is set to Serial Packet Mode
#define address 0x80

BMSerial terminal(0,1);      // this is usb cable from Arduino to computer
RoboClaw roboclaw(11,10);    // serial connection to RoboClaw
long avgSpeedM1 = 0, avgSpeedM2 = 0; // filtered speed estimates
// alpha is used to filter the results
float alpha = .10; // .1 = data smoothing single pole filter setting.

void setup() {
    terminal.begin(9600);
    roboclaw.begin(38400);
}

void displayspeed(void) {
    uint8_t status;
    bool valid;

    long enc1= roboclaw.ReadEncM1(address, &status, &valid);
    if(valid){
        terminal.print("Encoder1:");
        terminal.print(enc1,DEC);
        terminal.print(" ");
    }
    long enc2 = roboclaw.ReadEncM2(address, &status, &valid);
    if(valid){
        terminal.print("Encoder2:");
        terminal.print(enc2,DEC);
        terminal.print(" ");
    }
    long speed1 = roboclaw.ReadSpeedM1(address, &status, &valid);
    // filter the speed. You'll need to run the motors for a bit
    // in order to get the filtered values to 'settle down'
    // after about 20 seconds of my motors at full speed I got
    // converged results.
    avgSpeedM1 = avgSpeedM1 * (1-alpha) + speed1 * alpha;

    if(valid){
        terminal.print("Avg Speed1:");
        terminal.print(avgSpeedM1,DEC);
        terminal.print(" ");
    }

    long speed2 = roboclaw.ReadSpeedM2(address, &status, &valid);
    avgSpeedM2 = avgSpeedM2 * (1-alpha) + speed2 * alpha;

    if(valid){
        terminal.print("Avg Speed2:");
        terminal.print(avgSpeedM2,DEC);
        terminal.print(" ");
    }
    terminal.println();
}

void loop() {
    // run both motors at 'full speed'
    roboclaw.ForwardM1(address,127);
    roboclaw.ForwardM2(address,127);
    displayspeed();
}

Script for Deleting Multiple Backups from Time Machine

There are several different ways to delete individual backups from Time Machine, but most are rather tedious, involving selecting a particular backup set and then deleting it manually.

There’s the additional complication that if you back up over a network, you’re really backing up to a sparsebundle, which can only ‘grow’ in size but will not ‘shrink’ after you delete backups unless you intervene to make it do so.

So why is this even a problem? Well, if you’re a single user with a single backup drive, it probably isn’t a problem. However, if you’re like us and use a Mac OS X Server as the central repository for your Time Machine backups with multiple client machines, then you will eventually run into the situation where you can’t add more users because the volume is ‘full.’

For example, Bob, Sally, and Joe are all clients on a Mac OS X Server TimeCapsule. They go about their business and eventually have multiple backups spanning months or years. The TimeCapsule gets close to full and Time Machine does what it is supposed to do, which is prune each individual user’s backups as need be.

Now the problem comes when you hire Ann and add her new machine to the Server TimeCapsule. Chances are, on the very first backup you’ll get a “Not Enough Room” error because Bob, Sally, and Joe’s backups are each individually using up most of the space. While they each trim their own personal backups as needed, there’s no mechanism to ‘release’ more space to the ‘group.’ It’s the digital equivalent of the Tragedy of the Commons. Unfortunately, the OS X Server implementation of TimeCapsule isn’t smart enough to ‘broadcast’ to the existing users that they need to do extra pruning to make room for the new employee Ann.

Bummer.

It turns out in our circumstance, that the backups for the existing employees were going back 2+ years. We really don’t need to go back that far, so there were about 20+ backups on each individual’s machine that we could remove and I didn’t want to sit there and manually remove each of them from multiple different machines.

So I wrote the script at the bottom of this post. (In PHP because that’s the language I know best and use daily.)

Here are some notes:

  1. Because this runs from the command line in Terminal, we have to add the “#!/usr/bin/php” which would not be found in a normal PHP script. This is a default location for PHP. If you’ve changed something with your PHP install, you’ll need to modify this.
  2. The default timezone is required to prevent PHP from squawking. Any time zone should be fine as I’m only using date functions for validation.
  3. Only works on OS X 10.7 or higher.

Directions for usage.

  1. Temporarily turn off Time Machine on the client you’re working on.
  2. Download script and save it as ‘time_machine_prune.php’ to the desktop of each client machine.
  3. Open Terminal and navigate to the Desktop. (If you don’t know how to use Terminal or the CLI, this probably isn’t for you. Info on Terminal.)
  4. Change the permissions to make the script executable. “chmod 751 time_machine_prune.php”
  5. Run the script as a privileged user: sudo ./time_machine_prune.php

After you authenticate with your administrative password, the script will retrieve your oldest and newest backup sets to give you an estimate of your range:

Oldest and Newest Backup

Here we see that the oldest backup is from April 2012 and the newest backup is from April 2013.

Next enter a date before which you want all backups pruned. In this example, I entered 2013-05-11. The script will then show all backups that will be removed based on this date. In this case, there are two backups that would be affected.

Date before which to prune backups.

CAREFULLY REVIEW the list before you enter ‘yes’. If you proceed, these backups will be permanently removed and there’s no way to undo it if you make a mistake. If you do not want to proceed, enter ‘no’ or anything other than ‘yes’.

Assuming you elect to proceed, it will then start pruning the backups one after the other starting with the oldest ones first. This will take some time (many minutes or hours depending on how large your list is.)
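The selection the script performs boils down to: parse the date out of each tmutil backup name, keep everything strictly before the cutoff, oldest first. Here's that step in isolation as a Python sketch (the backup paths are made up; tmutil names backup folders YYYY-MM-DD-HHMMSS):

```python
import re
from datetime import datetime

def backups_to_prune(backup_paths, cutoff):
    """Return backups dated strictly before cutoff, oldest first."""
    dated = []
    for path in backup_paths:
        # tmutil names backup folders like 2013-04-12-093012
        m = re.search(r'(\d{4}-\d{2}-\d{2})-\d{6}$', path)
        if not m:
            raise ValueError("non-conforming backup name: %s" % path)
        dated.append((datetime.strptime(m.group(1), '%Y-%m-%d'), path))
    dated.sort()  # oldest first
    return [path for date, path in dated if date < cutoff]

# made-up example paths in tmutil's naming scheme
names = ['/Backups/Bob/2013-04-10-181544',
         '/Backups/Bob/2012-04-12-093012',
         '/Backups/Bob/2013-05-20-070101']
print(backups_to_prune(names, datetime(2013, 5, 11)))
# ['/Backups/Bob/2012-04-12-093012', '/Backups/Bob/2013-04-10-181544']
```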

Pruning of the Time Machine backups proceeding.

After a while, you should get the following screen indicating the number of backups pruned from the list. In this example, two were selected based on the date and two were removed.

Two Backups Killed.

If you ran this across the network, you now also need to compress the sparse image. See this post on Compacting Sparse Image Files. I ran it from the Server that held the images. It may also work from the client machine, but I didn’t try that.

Essentially you need to navigate to your TimeCapsule and then into Shared Items->Backups. From there run the command:

sudo hdiutil compact /Volumes/Timecapsule/Shared\ Items/Backups/the_machine_just_pruned.sparsebundle 

The Script

#!/usr/bin/php
<?php
date_default_timezone_set ( 'America/Los_Angeles' );

$all_backups 	= array();
// get the name of the computer
exec('/usr/sbin/scutil --get ComputerName',$computer_name);
// get a list of all the backups for that computer
exec("/usr/bin/tmutil listbackups | /usr/bin/grep \"".$computer_name[0]."\"",$all_backups);

echo "Oldest Backup: " . $all_backups[0] . "\n";
echo "Newest Backup: " . $all_backups[count($all_backups)-1] . "\n";

echo "Enter Date before which to prune archive (YYYY-MM-DD format):";
$handle = fopen ("php://stdin","r");
$line 	= fgets($handle);

$date_format = 'Y-m-d';

$input			= trim($line);
$prune_time 	= strtotime($input);
$is_valid 		= date($date_format, $prune_time) == $input;

if ($is_valid) {
	// The user entered a valid date, proceed.
	$time_list = array();
	foreach ($all_backups as $single_backup) {
		$path_parts = pathinfo(trim($single_backup));
		preg_match( "/^([0-9]{4}-[0-9]{2}-[0-9]{2})(-.*)/",$path_parts['basename'],$matches);
		if(!$matches[1]) {
			// found a backup with a non-conforming name: ABORT!
			// this script is not robust enough to deal with non-conforming backup names
			echo "Error in matching backups to regex\n";
			exit;
		}
		$time = strtotime($matches[1]);
		// build a key/value list keyed by the backup's date.
		// (note: two backups from the same day would collide on the same key;
		// prune same-day backups manually if you have them)
		$time_list[$time] = $single_backup;
		unset($matches);
	}
	$count_prune = 0;
	$prune_list  = array();
	echo "\nThe following backups will be pruned from TimeMachine:\n";
	ksort($time_list); // make sure we walk oldest-first
	foreach ($time_list as $bu_time=>$bu_name) {
		// walk thru the list and compare the prune date (expressed as time) to the
		// time_list. Anything less than the user entered value gets added to the
		// prune_list array
		if ($bu_time < $prune_time) {
			echo "  $bu_name\n";
			$count_prune ++;
			$prune_list[] = $bu_name;
		}
	}
	echo "\nTotal Backups to prune: $count_prune\n";
	echo "***********************************************************************************\n";
	echo "*** CAREFULLY REVIEW above list. All listed backups will be deleted permanently ***\n";
	echo "*** Enter 'yes' to proceed: ";
	$handle = fopen ("php://stdin","r");
	$line 	= fgets($handle);
	$input	= trim($line);
	if ($input == 'yes') {
		// user has elected to proceed with the pruning of the backup.
		echo "Proceeding with pruning, this may take awhile...\n";
		$kill_count = 0;
		foreach ($prune_list as $backup_to_kill) {
			// for each entry in the prune_list, use tmutil to delete that backup
			exec("/usr/bin/sudo /usr/bin/tmutil delete \"$backup_to_kill\"",$result);
			echo "  killed: $backup_to_kill\n";
			$kill_count ++;
		}
		echo "\nSuccessfully pruned $kill_count backups\n";
	} else {
		// user entered something other than 'yes' on the command line.
		echo "Pruning Canceled!! \n";
	}
} else {
	// user entered an invalid date.
	echo "$input is an invalid date. It must be entered in YYYY-MM-DD format\n";
	exit;
}
echo "\n\n";
?>

Getting Peplink Balance 20 to connect outside SIP extensions to Asterisk server

Okay, we used to have a Peplink Balance 200, but recently moved to a Peplink Balance 20 for greater throughput. We didn’t need the features of the 210, so we opted for the much cheaper Balance 20. (The 210 is about $1000 more than the 20.)

Struggled for a while to get remote extensions (phones located on the internet outside of the LAN) to connect to the server. On the Balance 200, it just pretty much worked once the appropriate ports were forwarded to the server IP on the LAN.

On the Peplink Balance 20, though, I was only able to get it to work after I set the default connection mode to ‘Persistence’. Normally it defaults to “Lowest Latency”. In our circumstance, the WAN address the call comes in on is not necessarily our lowest-latency line.

To make the adjustment log into your Peplink Balance, then navigate to Outbound Policy. The ‘Default’ setting is the one right above ‘Add Rule’.

Peplink Balance 20 outbound policy

Click on ‘Default’ to open the settings and then use Persistence and “By Source”. Apparently “By Source” is the most compatible setting, and I had to use it rather than just “By Destination”.

Revised for SIP connections

Prior to making the changes, we could call out to the remote extension, but the remote extension could not successfully initiate an inbound connection. My guess is that the remote SIP extension pings the WAN and is directed to the asterisk server on the LAN, but when the asterisk server attempts to open an RTP UDP session with the phone, it fails because the outbound route it attempts to use is not the same as the inbound route. WAN1 has a lower latency but less bandwidth, whereas WAN2 has a higher latency but more bandwidth. Our remote phone extension currently connects in via WAN2, and with the default settings on the Balance 20, the SIP UDP connection was being sent back out WAN1 instead of where the phone expects to pick it up on WAN2. (This is my guess as to what’s happening, based on the fact that we could call the phone, but the phone couldn’t call us.)

I suspect I can probably add a specific outbound policy rather than mess with the default. That’s probably better in the long run too, as it would narrow the scope of the persistence to just the appropriate protocols. I’ll look into that soon and update if I can narrow the scope of the outbound policy and still maintain the connection.