Encrypted connection between SIP phones using Asterisk

Here is the setup I put together to have two SIP phones talk to each other over an encrypted channel. Since the two phones do not support encryption, I used Asterisk to provide the encrypted channel over the Internet.

Installing Asterisk

First of all, each VoIP phone is in a different physical location and so I installed an Asterisk server in each house.

One of the servers is a Debian stretch machine and the other runs Ubuntu 18.04 (bionic). Regardless, I used a fairly standard configuration and simply installed the asterisk package on both machines:

apt install asterisk

SIP phones

The two phones, both Snom 300s, connect to their local Asterisk server on its local IP address, using the same details I put in /etc/asterisk/sip.conf:

[1000]
type=friend
qualify=yes
secret=password1
encryption=no
context=internal
host=dynamic
nat=no
canreinvite=yes
mailbox=1000@internal
vmexten=707
dtmfmode=rfc2833
call-limit=2
disallow=all
allow=g722
allow=ulaw

Dialplan and voicemail

The extension number above (1000) maps to the following configuration blurb in /etc/asterisk/extensions.conf:

[home]
exten => 1000,1,Dial(SIP/1000,20)
exten => 1000,n,Goto(in1000-${DIALSTATUS},1)
exten => 1000,n,Hangup
exten => in1000-BUSY,1,Hangup(17)
exten => in1000-CONGESTION,1,Hangup(3)
exten => in1000-CHANUNAVAIL,1,VoiceMail(1000@mailboxes,su)
exten => in1000-CHANUNAVAIL,n,Hangup(3)
exten => in1000-NOANSWER,1,VoiceMail(1000@mailboxes,su)
exten => in1000-NOANSWER,n,Hangup(16)
exten => _in1000-.,1,Hangup(16)

The internal context maps to the following blurb in /etc/asterisk/extensions.conf:

[internal]
include => home
include => iax2users
exten => 707,1,VoiceMailMain(1000@mailboxes)

and 1000@mailboxes maps to the following entry in /etc/asterisk/voicemail.conf:

[mailboxes]
1000 => 1234,home,person@email.com

(with 1234 being the voicemail PIN).

Encrypted IAX links

In order to create a virtual link between the two servers using the IAX protocol, I created user credentials on each server in /etc/asterisk/iax.conf:

[iaxuser]
type=user
auth=md5
secret=password2
context=iax2users
allow=g722
allow=speex
encryption=aes128
trunk=no

then I created an entry for the other server in the same file:

[server2]
type=peer
host=server2.dyn.fmarier.org
auth=md5
secret=password2
username=iaxuser
allow=g722
allow=speex
encryption=yes
forceencrypt=yes
trunk=no
qualify=yes

The second machine contains the same configuration with the exception of the server name (server1 instead of server2) and hostname (server1.dyn.fmarier.org instead of server2.dyn.fmarier.org).
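
In other words, the corresponding entry on the second machine would look something like this (identical except for the name and hostname):

[server1]
type=peer
host=server1.dyn.fmarier.org
auth=md5
secret=password2
username=iaxuser
allow=g722
allow=speex
encryption=yes
forceencrypt=yes
trunk=no
qualify=yes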

Speed dial for the other phone

Finally, to allow each phone to ring the other by dialing 2000, I put the following in /etc/asterisk/extensions.conf:

[iax2users]
include => home
exten => 2000,1,Set(CALLERID(all)=Francois Marier <2000>)
exten => 2000,2,Dial(IAX2/server1/1000)

and of course a similar blurb on the other machine:

[iax2users]
include => home
exten => 2000,1,Set(CALLERID(all)=Other Person <2000>)
exten => 2000,2,Dial(IAX2/server2/1000)
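
After editing these files, the new configuration can be loaded without restarting Asterisk. Something like this from a shell should do it (restarting the asterisk service also works):

asterisk -rx "sip reload"
asterisk -rx "iax2 reload"
asterisk -rx "dialplan reload"
asterisk -rx "voicemail reload"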

Firewall rules

Since we are using the IAX protocol instead of SIP, there is only one port to open in /etc/network/iptables.up.rules for the remote server:

# IAX2 protocol
-A INPUT -s x.x.x.x/y -p udp --dport 4569 -j ACCEPT

where x.x.x.x/y is the IP range allocated to the ISP that the other machine is behind.

If you want to restrict traffic on the local network as well, then these ports need to be open for the SIP phone to be able to connect to its local server:

# VoIP phones (internal)
-A INPUT -s 192.168.1.3/32 -p udp --dport 5060 -j ACCEPT
-A INPUT -s 192.168.1.3/32 -p udp --dport 10000:20000 -j ACCEPT

where 192.168.1.3 is the static IP address allocated to the SIP phone.
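
As with any change to that file, the new rules have to be loaded before they take effect. A cautious way to do it, assuming the file is the one applied at boot, is:

iptables-apply /etc/network/iptables.up.rules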

Fedora 29 LXC setup on Ubuntu Bionic 18.04

Similarly to what I wrote for Debian stretch and jessie, here is how I was able to create a Fedora 29 LXC container on an Ubuntu 18.04 (bionic) laptop.

Setting up LXC on Ubuntu

First of all, install lxc:

apt install lxc
echo "veth" >> /etc/modules
modprobe veth

turn on IP forwarding (needed for the containers to reach the outside world through the lxcbr0 bridge) by putting the following in /etc/sysctl.d/local.conf:

net.ipv4.ip_forward=1

and applying it using:

sysctl -p /etc/sysctl.d/local.conf

Then allow the right traffic in your firewall (/etc/network/iptables.up.rules in my case):

# LXC containers
-A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.0.3.0/24 -j ACCEPT
-A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT

and apply these changes:

iptables-apply

before restarting the lxc networking:

systemctl restart lxc-net.service
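
The containers attach to the lxcbr0 bridge that lxc-net manages. That should already be the default on Ubuntu, but it's worth confirming that /etc/lxc/default.conf contains something along these lines (these are the LXC 3.x key names):

lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up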

Create the container

Once that's in place, you can finally create the Fedora 29 container:

lxc-create -n fedora29 -t download -- -d fedora -r 29 -a amd64

Logging in as root

Start up the container and get a login console:

lxc-start -n fedora29 -F

In another terminal, set a password for the root user:

lxc-attach -n fedora29 passwd

You can now use this password to log into the console you started earlier.

Logging in as an unprivileged user via ssh

As root, install a few packages:

dnf install openssh-server vim sudo man

and then create an unprivileged user with sudo access:

adduser francois -G wheel
passwd francois

Now log in as that user from the console and add an ssh public key:

mkdir .ssh
chmod 700 .ssh
echo "<your public key>" > .ssh/authorized_keys
chmod 644 .ssh/authorized_keys
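
Depending on the image, the ssh daemon may not be running yet inside the container. If that's the case, enable and start it as root:

systemctl enable --now sshd.service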

You can now log in via ssh. The IP address to use can be seen in the output of:

lxc-ls --fancy

Enabling all necessary locales

To ensure that you have all available locales and don't see ugly perl warnings such as:

perl: warning: Setting locale failed.
perl: warning: Falling back to the standard locale ("C").

install the appropriate language packs:

dnf install langpacks-en.noarch
dnf reinstall dnf

Erasing Persistent Storage Securely on Linux

Here are some notes on how to securely delete computer data in a way that makes it impractical for anybody to recover that data. This is an important thing to do before giving away (or throwing away) old disks.

Ideally though, it's better not to have to rely on secure erasure at all and to use full-disk encryption right from the start, for example using LUKS. That way, if the secure deletion fails for whatever reason, or can't be performed (e.g. the drive is dead), it's not a big deal.
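
As a rough sketch, putting LUKS on a new data partition looks like this (assuming /dev/sdX2 is a blank partition; this will of course destroy anything already on it):

cryptsetup luksFormat /dev/sdX2
cryptsetup luksOpen /dev/sdX2 encrypted_data
mkfs.ext4 /dev/mapper/encrypted_data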

Rotating hard drives

With ATA or SCSI hard drives, DBAN seems to be the ideal solution.

  1. Burn it to a CD,
  2. boot from it,
  3. and follow the instructions.

Note that you should disconnect any drives you don't want to erase before booting with that CD.

This is probably the most trustworthy method of wiping since it uses free and open source software to write to each sector of the drive several times. The methods that follow rely on proprietary software built into the firmware of the devices and so you have to trust that it is implemented properly and not backdoored.

ATA / SATA solid-state drives

Due to the nature of solid-state storage (i.e. the lifetime number of writes is limited), it's not a good idea to use DBAN for those. Instead, we must rely on the vendor's implementation of ATA Secure Erase.
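
Before issuing the commands below, it's worth confirming that the drive actually supports the feature and isn't in a "frozen" state (a frozen drive will refuse the security commands):

hdparm -I /dev/sdX | grep -i frozen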

First, set a password on the drive:

hdparm --user-master u --security-set-pass p /dev/sdX

and then issue a Secure Erase command:

hdparm --user-master u --security-erase-enhanced p /dev/sdX

NVMe solid-state drives

For SSDs using an NVMe connector, simply request a User Data Erase:

nvme format -s1 /dev/nvme0n1

Restricting outgoing HTTP traffic in a web application using a squid proxy

I recently had to fix a Server-Side Request Forgery bug in Libravatar's OpenID support. In addition to enabling authentication on internal services whenever possible, I also forced all outgoing network requests from the Django web-application to go through a restrictive egress proxy.

OpenID logins are prone to SSRF

Server-Side Request Forgeries are vulnerabilities which allow attackers to issue arbitrary GET requests on the server side. Unlike a Cross-Site Request Forgery, SSRF requests do not include user credentials (e.g. cookies). On the other hand, since these requests are done by the server, they typically originate from inside the firewall.

This allows attackers to target internal resources and issue arbitrary GET requests to them. One could use this to leak information (especially when error reports include the request payload), tamper with the state of internal services, or portscan an internal network.

OpenID 1.x logins are prone to these vulnerabilities because of the way they are initiated:

  1. Users visit a site's login page.
  2. They enter their OpenID URL in a text field.
  3. The server fetches the given URL to discover the OpenID endpoints.
  4. The server redirects the user to their OpenID provider to continue the rest of the login flow.

The third step is the potentially problematic one since it requires a server-side fetch.

Filtering URLs in the application is not enough

At first, I thought I would filter out undesirable URLs inside the application:

  • hostnames like localhost, 127.0.0.1 or ::1
  • non-HTTP schemes like file or gopher
  • non-standard ports like 5432 or 11211

However, this filtering is very easy to bypass:

  1. Add a hostname in your DNS zone which resolves to 127.0.0.1.
  2. Set up a redirect to a blacklisted URL such as file:///etc/passwd.

Applying the filter on the original URL is clearly not enough.

Install and configure a Squid proxy

In order to fully restrict outgoing OpenID requests from the web application, I used a Squid HTTP proxy.

First, install the package:

apt install squid3

and set the following in /etc/squid3/squid.conf:

acl to_localnet dst 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
acl to_localnet dst 10.0.0.0/8            # RFC 1918 local private network (LAN)
acl to_localnet dst 100.64.0.0/10         # RFC 6598 shared address space (CGN)
acl to_localnet dst 169.254.0.0/16        # RFC 3927 link-local (directly plugged) machines
acl to_localnet dst 172.16.0.0/12         # RFC 1918 local private network (LAN)
acl to_localnet dst 192.168.0.0/16        # RFC 1918 local private network (LAN)
acl to_localnet dst fc00::/7              # RFC 4193 local private network range
acl to_localnet dst fe80::/10             # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 443
acl CONNECT method CONNECT

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny manager
http_access deny to_localhost
http_access deny to_localnet
http_access allow localhost
http_access deny all

http_port 127.0.0.1:3128

Ideally, I would like to use a whitelist approach to restrict requests to a small set of valid URLs, but in the case of OpenID, the set of valid URLs is not fixed. Therefore the only workable approach is a blacklist. The above snippet whitelists port numbers (80 and 443) and blacklists requests to localhost (a built-in squid acl variable which resolves to 127.0.0.1 and ::1) as well as known local IP ranges.

Expose the proxy to Django in the WSGI configuration

In order to force all outgoing requests from Django to go through the proxy, I put the following in my WSGI application (/etc/libravatar/django.wsgi):

os.environ['ftp_proxy'] = "http://127.0.0.1:3128"
os.environ['http_proxy'] = "http://127.0.0.1:3128"
os.environ['https_proxy'] = "http://127.0.0.1:3128"
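
A quick way to sanity-check the proxy rules from the command line is to request a link-local address through the proxy (169.254.169.254 is just an arbitrary example here) and confirm that Squid refuses it with a 403 instead of connecting:

curl --proxy http://127.0.0.1:3128 http://169.254.169.254/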

The whole thing seemed to work well in my limited testing. There is however a bug in urllib2 with proxying HTTPS URLs that include a port number, and there is an open issue in python-openid around proxies and OpenID.

Lean data in practice

Mozilla has been promoting the idea of lean data for a while. It's about recognizing both that data is valuable and that it is a dangerous thing to hold on to. Following these lean data principles forces you to clarify the questions you want to answer and think hard about the minimal set of information you need to answer these questions.

Out of these general principles came the Firefox data collection guidelines. These are the guidelines that every team must follow when they want to collect data about our users and that are enforced through the data stewardship program.

As one of the data stewards for Firefox, I have reviewed hundreds of data collection requests and can attest to the fact that Mozilla does follow the lean data principles it promotes. Mozillians are already aware of the problems with collecting large amounts of data, but the Firefox data review process provides an additional opportunity for an outsider to question the necessity of each piece of data. In my experience, this system is quite effective at reducing the data footprint of Firefox.

What does lean data look like in practice? Here are a few examples of changes that were made to restrict the data collected by Firefox to what is truly needed:

  • Collecting a user's country is not particularly identifying in the case of large countries like the USA, but it can be when it comes to very small island nations. How many Firefox users are there in Niue? Hard to know, but it's definitely less than the number of Firefox users in Germany. After I raised that issue, the team decided to put all of the small countries into a single "other" bucket.

  • Similarly, cities generally have enough users to be non-identifying. However, some municipalities are quite small and can lead to the same problems. There are lots of Firefox users in Portland, Oregon for example, but probably not that many in Portland, Arkansas or Portland, Pennsylvania. If you want to tell the Oregonian Portlanders apart, it might be sufficient to bucket Portland users into "Oregon" and "not Oregon", instead of recording both the city and the state.

  • When collecting window sizes and other pixel-based measurements, it's easier to collect the exact value. However, that exact value could be stable for a while and create a temporary fingerprint for a user. In most cases, teams wanting to collect this kind of data have agreed to round the value in order to increase the number of users in each "bucket" without affecting their ability to answer their underlying questions.

  • Firefox occasionally runs studies which involve collecting specific URLs that users have consented to share with us (e.g. "this site crashes my Firefox"). In most cases though, the full URL is not needed and so I have often been able to get teams to restrict the collection to the hostname, or to at least remove the query string, which could include usernames and passwords on badly-designed websites.

  • When making use of Google Analytics, it may not be necessary to collect everything it supports by default. For example, my suggestion to trim the referrers was implemented by one of the teams using Google Analytics: while referrers would have been an interesting data point, they weren't necessary to answer the questions the team had in mind.

Some of these might sound like small wins, but to me they are a sign that the process is working. In most cases, requests are very easy to approve because developers have already done the hard work of data minimization. In a few cases, by asking questions and getting familiar with the problem, the data steward can point out opportunities for further reductions in data collection that the team may have missed.

Installing Vidyo on Ubuntu 18.04

Following these instructions as well as the comments in there, I was able to get Vidyo, the proprietary videoconferencing system that Mozilla uses internally, to work on Ubuntu 18.04 (Bionic Beaver). The same instructions should work on recent versions of Debian too.

Installing dependencies

First of all, install all of the package dependencies:

sudo apt install libqt4-designer libqt4-opengl libqt4-svg libqtgui4 libqtwebkit4 sni-qt overlay-scrollbar-gtk2 libcanberra-gtk-module

Then, ensure you have a system tray application running. This should be the case for most desktop environments.

Building a custom Vidyo package

Download version 3.6.3 from the CERN Vidyo Portal but don't expect to be able to install it right away.

You need to first hack the package in order to remove obsolete dependencies.
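
The general approach (a sketch; the exact dependencies to remove are listed in the instructions linked above, and the downloaded filename may differ) is to unpack the .deb, edit the Depends line in the control file, and repack it:

dpkg-deb -R VidyoDesktopInstaller-*.deb vidyo-tmp
# edit vidyo-tmp/DEBIAN/control and drop the obsolete packages from the Depends line
dpkg-deb -b vidyo-tmp vidyodesktop-custom.deb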

Once that's done, install the resulting package:

sudo dpkg -i vidyodesktop-custom.deb

Packaging fixes and configuration

There are a few more things to fix before it's ready to be used.

First, fix the ownership on the main executable:

sudo chown root:root /usr/bin/VidyoDesktop

Then disable autostart since you probably don't want to keep the client running all of the time (and listening on the network), given that it hasn't received any updates in a long time and has apparently been abandoned by Vidyo:

sudo rm /etc/xdg/autostart/VidyoDesktop.desktop

Remove any old configs in your home directory that could interfere with this version:

rm -rf ~/.vidyo ~/.config/Vidyo

Finally, launch VidyoDesktop and go into the settings to check "Always use VidyoProxy".

Mercurial commit series in Phabricator using Arcanist

Phabricator supports multi-commit patch series, but it's not yet obvious how to do it using Mercurial. So this is the "hg" equivalent of this blog post for git users.

Note that other people have written tools and plugins to do the same thing and that an official client is coming soon.

Initial setup

I'm going to assume that you've set up arcanist and gotten an account on the Mozilla Phabricator instance. If you haven't, follow this video introduction or the excellent documentation for it (Bryce also wrote additional instructions for Windows users).

Make a list of commits to submit

First of all, use hg histedit to make a list of the commits that are needed:

pick ee4d9e9fcbad 477986 Bug 1461515 - Split tracking annotations from tracki...
pick 5509b5db01a4 477987 Bug 1461515 - Fix and expand tracking annotation tes...
pick e40312debf76 477988 Bug 1461515 - Make TP test fail if it uses the wrong...

Create Phabricator revisions

Now, create a Phabricator revision for each commit (in order, from earliest to latest):

~/devel/mozilla-unified (annotation-list-1461515)$ hg up ee4d9e9fcbad
5 files updated, 0 files merged, 0 files removed, 0 files unresolved
(leaving bookmark annotation-list-1461515)

~/devel/mozilla-unified (ee4d9e9)$ arc diff --no-amend
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
 SKIP STAGING  Phabricator does not support staging areas for this repository.
Created a new Differential revision:
        Revision URI: https://phabricator.services.mozilla.com/D2484

Included changes:
  M       modules/libpref/init/all.js
  M       netwerk/base/nsChannelClassifier.cpp
  M       netwerk/base/nsChannelClassifier.h
  M       toolkit/components/url-classifier/Classifier.cpp
  M       toolkit/components/url-classifier/SafeBrowsing.jsm
  M       toolkit/components/url-classifier/nsUrlClassifierDBService.cpp
  M       toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
  M       toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
  M       xpcom/base/ErrorList.py

~/devel/mozilla-unified (ee4d9e9)$ hg up 5509b5db01a4
3 files updated, 0 files merged, 0 files removed, 0 files unresolved

~/devel/mozilla-unified (5509b5d)$ arc diff --no-amend
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
 SKIP STAGING  Phabricator does not support staging areas for this repository.
Created a new Differential revision:
        Revision URI: https://phabricator.services.mozilla.com/D2485

Included changes:
  M       toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
  M       toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
  M       toolkit/components/url-classifier/tests/mochitest/trackingRequest.html

~/devel/mozilla-unified (5509b5d)$ hg up e40312debf76
2 files updated, 0 files merged, 0 files removed, 0 files unresolved

~/devel/mozilla-unified (e40312d)$ arc diff --no-amend
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
 SKIP STAGING  Phabricator does not support staging areas for this repository.
Created a new Differential revision:
        Revision URI: https://phabricator.services.mozilla.com/D2486

Included changes:
  M       toolkit/components/url-classifier/tests/mochitest/classifiedAnnotatedPBFrame.html
  M       toolkit/components/url-classifier/tests/mochitest/test_privatebrowsing_trackingprotection.html

Link all revisions together

In order to ensure that these commits depend on one another, click on that last phabricator.services.mozilla.com link, then click "Related Revisions" then "Edit Parent Revisions" in the right-hand side bar and then add the previous commit (D2485 in this example).

Then go to that parent revision and repeat the same steps to set D2484 as its parent.

Amend one of the commits

As it turns out my first patch wasn't perfect and I needed to amend the middle commit to fix some test failures that came up after pushing to Try. I ended up with the following commits (as viewed in hg histedit):

pick ee4d9e9fcbad 477986 Bug 1461515 - Split tracking annotations from tracki...
pick c24f4d9e75b9 477992 Bug 1461515 - Fix and expand tracking annotation tes...
pick 1840f68978a7 477993 Bug 1461515 - Make TP test fail if it uses the wrong...

which highlights that the last two commits changed and that I would have two revisions (D2485 and D2486) to update in Phabricator.

However, since the only reason why the third patch has a different commit hash is that its parent changed, there's no need to upload it again to Phabricator. Lando doesn't care about the parent hash and relies instead on the parent revision ID. It essentially applies diffs one at a time.

The trick was to pass the --update DXXXX argument to arc diff:

~/devel/mozilla-unified (annotation-list-1461515)$ hg up c24f4d9e75b9
2 files updated, 0 files merged, 0 files removed, 0 files unresolved
(leaving bookmark annotation-list-1461515)

~/devel/mozilla-unified (c24f4d9)$ arc diff --no-amend --update D2485
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
 SKIP STAGING  Phabricator does not support staging areas for this repository.
Updated an existing Differential revision:
        Revision URI: https://phabricator.services.mozilla.com/D2485

Included changes:
  M       browser/base/content/test/general/trackingPage.html
  M       netwerk/test/unit/test_trackingProtection_annotateChannels.js
  M       toolkit/components/antitracking/test/browser/browser_imageCache.js
  M       toolkit/components/antitracking/test/browser/browser_subResources.js
  M       toolkit/components/antitracking/test/browser/head.js
  M       toolkit/components/antitracking/test/browser/popup.html
  M       toolkit/components/antitracking/test/browser/tracker.js
  M       toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
  M       toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
  M       toolkit/components/url-classifier/tests/mochitest/trackingRequest.html

Note that changing the commit message will not automatically update the revision details in Phabricator. This has to be done manually in the Web UI if required.

Recovering from a botched hg histedit on a mercurial bookmark

If you are in the middle of a failed Mercurial hg histedit, you can normally do hg histedit --abort to cancel it, though sometimes you also have to reach for hg update -C. This is the equivalent of git's git rebase --abort and it does what you'd expect.

However, if you go ahead and finish the history rewriting and only notice problems later, it's not as straightforward. With git, I'd look into the reflog (git reflog) for the previous value of the branch pointer and simply git reset --hard to that value. Done.

Based on a Stack Overflow answer, I thought I could undo my botched histedit using:

hg unbundle ~/devel/mozilla-unified/.hg/strip-backup/47906774d58d-ae1953e1-backup.hg

but it didn't seem to work. Maybe it doesn't work when using bookmarks.

Here's what I ended up doing to fully revert my botched Mercurial histedit. If you know of a simpler way to do this, feel free to leave a comment.

Collecting the commits to restore

The first step was to collect all of the commit hashes I needed to restore. Luckily, I had submitted my patch to Try before changing it and so I was able to look at the pushlog to get all of the commits at once.

If I didn't have that, I could also go to the last bookmark I pushed and click on parent commits until I hit the first one that's not mine. Then I could collect all of the commits using the browser's back button.

For that last one, I had to click on the changeset commit hash link in order to get the commit hash instead of the name of the bookmark (/rev/hashstore-crash-1434206).

Recreating the branch from scratch

This is what I did to export patches for each commit and then re-import them one after the other:

for c in 3c31c543e736 7ddfe5ae2fa6 c04b620136c7 2d1bf04fd155 e194843f5b7a 47906774d58d f6a657bca64f 0d7a4e1c0079 976e25b49758 a1a382f2e773 b1565f3aacdb 3fdd157bb698 b1b041990577 220bf5cd9e2a c927a5205abe ; do hg export $c > ~/$c.patch ; done
hg up ff8505d177b9
hg bookmarks hashstore-crash-1434206-new
for c in 3c31c543e736 7ddfe5ae2fa6 c04b620136c7 2d1bf04fd155 e194843f5b7a 47906774d58d f6a657bca64f 0d7a4e1c0079 976e25b49758 a1a382f2e773 b1565f3aacdb 3fdd157bb698 b1b041990577 220bf5cd9e2a c927a5205abe 4140cd9c67b0 ; do hg import ~/$c.patch ; done

Copying a bookmark

As an aside, if you want to make a copy of a bookmark before you do an hg histedit, it's not as simple as:

hg up hashstore-crash-1434206
hg bookmarks hashstore-crash-1434206-copy
hg up hashstore-crash-1434206

While that seemed to work at the time, the histedit ended up messing with both of them.

An alternative that works is to push the bookmark to another machine. That way, if worst comes to worst, you can hg clone from there and hg export the commits you want to re-import using hg import.
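
For example, something like this, where backuphost and the destination path are placeholders for wherever you keep a spare clone:

hg push -B hashstore-crash-1434206 ssh://backuphost//home/francois/devel/mozilla-unified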

Mysterious 'everybody is busy/congested at this time' error in Asterisk

I was trying to figure out why I was getting a BUSY signal from Asterisk while trying to ring a SIP phone even though that phone was not in use.

My asterisk setup looks like this:

phone 1 <--SIP--> asterisk 1 <==IAX2==> asterisk 2 <--SIP--> phone 2

While I couldn't call SIP phone #2 from SIP phone #1, the reverse was working fine (ringing #1 from #2). So it's not a network/firewall problem. The two SIP phones can talk to one another through their respective Asterisk servers.

This is the error message I could see on the second asterisk server:

$ asterisk -r
...
  == Using SIP RTP TOS bits 184
  == Using SIP RTP CoS mark 5
    -- Called SIP/12345
    -- SIP/12345-00000002 redirecting info has changed, passing it to IAX2/iaxuser-6347
    -- SIP/12345-00000002 is busy
  == Everyone is busy/congested at this time (1:1/0/0)
    -- Executing [12345@local:2] Goto("IAX2/iaxuser-6347", "in12345-BUSY,1") in new stack
    -- Goto (local,in12345-BUSY,1)
    -- Executing [in12345-BUSY@local:1] Hangup("IAX2/iaxuser-6347", "17") in new stack
  == Spawn extension (local, in12345-BUSY, 1) exited non-zero on 'IAX2/iaxuser-6347'
    -- Hungup 'IAX2/iaxuser-6347'

where:

  • 12345 is the extension of SIP phone #2 on Asterisk server #2
  • iaxuser is the user account on server #2 that server #1 uses
  • local is the context for incoming IAX calls on server #2

This Everyone is busy/congested at this time (1:1/0/0) error was surprising since looking at each SIP channel on that server showed nobody as busy:

asterisk2*CLI> sip show inuse
* Peer name               In use          Limit           
12345                     0/0/0           2               

So I enabled the raw SIP debug output and got the following (edited for clarity):

asterisk2*CLI> sip set debug on
SIP Debugging enabled

  == Using SIP RTP TOS bits 184
  == Using SIP RTP CoS mark 5

INVITE sip:12345@192.168.0.4:2048;line=m2vlbuoc SIP/2.0
Via: SIP/2.0/UDP 192.168.0.2:5060
From: "Francois Marier" <sip:67890@192.168.0.2>
To: <sip:12345@192.168.0.4:2048;line=m2vlbuoc>
CSeq: 102 INVITE
User-Agent: Asterisk PBX
Contact: <sip:67890@192.168.0.2:5060>
Content-Length: 274

    -- Called SIP/12345

<--- SIP read from UDP:192.168.0.4:2048 --->
SIP/2.0 100 Trying
Via: SIP/2.0/UDP 192.168.0.2:5060
From: "Francois Marier" <sip:67890@192.168.0.2>
To: <sip:12345@192.168.0.4:2048;line=m2vlbuoc>
CSeq: 102 INVITE
User-Agent: snom300
Contact: <sip:12345@192.168.0.4:2048;line=m2vlbuoc>
Content-Length: 0

<------------->
--- (9 headers 0 lines) ---

<--- SIP read from UDP:192.168.0.4:2048 --->
SIP/2.0 480 Do Not Disturb
Via: SIP/2.0/UDP 192.168.0.2:5060
From: "Francois Marier" <sip:67890@192.168.0.2>
To: <sip:12345@192.168.0.4:2048;line=m2vlbuoc>
CSeq: 102 INVITE
User-Agent: snom300
Contact: <sip:12345@192.168.0.4:2048;line=m2vlbuoc>
Content-Length: 0

where:

  • 12345 is the extension of SIP phone #2 on Asterisk server #2
  • 67890 is the extension of SIP phone #1 on Asterisk server #2
  • 192.168.0.4 is the IP address of SIP phone #2
  • 192.168.0.2 is the IP address of Asterisk server #2

From there, I can see that SIP phone #2 is returning a status of 480 Do Not Disturb. That's what the problem was: the phone itself was in DnD mode and set to reject all incoming calls.

Running mythtv-setup over ssh

In order to configure a remote MythTV server, I had to run mythtv-setup remotely over an ssh connection with X forwarding:

ssh -X mythtv@machine

For most config options, I can either use the configuration menus inside of mythfrontend (over a vnc connection) or the Settings section of MythWeb, but some of the backend and tuner settings are only available through the main setup program.

Unfortunately, mythtv-setup won't work over an ssh connection by default and prints the following error in the terminal:

$ mythtv-setup
...
W  OpenGL: Could not determine whether Sync to VBlank is enabled.
Handling Segmentation fault
Segmentation fault (core dumped)

The fix for this was to specify a different theme engine:

mythtv-setup -O ThemePainter=qt