<p>Pages tagged <em>libravatar</em> on <a href="https://feeding.cloud.geek.nz/tags/libravatar/">Feeding the Cloud</a></p>
<h1><a href="https://feeding.cloud.geek.nz/posts/restricting-outgoing-webapp-requests-using-squid-proxy/">Restricting outgoing HTTP traffic in a web application using a squid proxy</a></h1>
<a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>
<p>2018-12-27</p>
<p>I recently had to fix a <a href="https://bugs.launchpad.net/libravatar/+bug/1808720">Server-Side Request Forgery
bug</a> in Libravatar's
<a href="https://en.wikipedia.org/wiki/OpenID">OpenID</a> support. In addition to
<strong>enabling authentication on internal services</strong> whenever possible, I also
forced all outgoing network requests from the Django web application to go
through a restrictive egress proxy.</p>
<h1 id="OpenID_logins_are_prone_to_SSRF">OpenID logins are prone to SSRF</h1>
<p><a href="https://www.acunetix.com/blog/articles/server-side-request-forgery-vulnerability/">Server-Side Request
Forgeries</a>
are vulnerabilities which allow attackers to issue arbitrary <a href="https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Request_methods">GET
requests</a>
on the server side. Unlike a <a href="https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF%29">Cross-Site Request
Forgery</a>,
SSRF requests do not include user credentials (e.g. cookies). On the other
hand, since these requests are done by the server, they typically originate
from inside the firewall.</p>
<p>This allows attackers to target internal resources and issue arbitrary GET
requests to them. One could use this to leak information (especially when
error reports include the request payload), to tamper with the state of
internal services, or to portscan an internal network.</p>
<p>OpenID 1.x logins are prone to these vulnerabilities because of the way they
are initiated:</p>
<ol>
<li>Users visit a site's login page.</li>
<li>They enter their OpenID URL in a text field.</li>
<li>The server fetches the given URL to discover the OpenID endpoints.</li>
<li>The server redirects the user to their OpenID provider to continue the
rest of the login flow.</li>
</ol>
<p>The third step is the potentially problematic one since it requires a
server-side fetch.</p>
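<p>To make the risk concrete, here is a minimal sketch (hypothetical, not Libravatar's actual code) of that discovery step: the server fetches the user-supplied URL and scans the returned HTML for OpenID 1.x <code>&lt;link&gt;</code> endpoints. The fetch itself is what an attacker can point at internal services:</p>

```python
from html.parser import HTMLParser

class OpenIDLinkParser(HTMLParser):
    """Collect OpenID 1.x endpoints from <link rel="..."> tags."""

    def __init__(self):
        super().__init__()
        self.endpoints = {}

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        attrs = dict(attrs)
        rel, href = attrs.get("rel"), attrs.get("href")
        if rel in ("openid.server", "openid.delegate") and href:
            self.endpoints[rel] = href

# The dangerous part is the fetch itself, e.g.:
#   html = urllib.request.urlopen(user_supplied_url).read().decode()
# Any URL the user types in gets requested from inside the firewall.
html = '<html><head><link rel="openid.server" href="https://id.example.com/endpoint"></head></html>'
parser = OpenIDLinkParser()
parser.feed(html)
print(parser.endpoints["openid.server"])  # https://id.example.com/endpoint
```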
<h1 id="Filtering_URLs_in_the_application_is_not_enough">Filtering URLs in the application is not enough</h1>
<p>At first, I thought I would filter out undesirable URLs inside the
application:</p>
<ul>
<li>hostnames like <code>localhost</code>, <code>127.0.0.1</code> or <code>::1</code></li>
<li>non-HTTP schemes like <code>file</code> or <code>gopher</code></li>
<li>non-standard ports like <code>5432</code> or <code>11211</code></li>
</ul>
<p>However, this kind of filtering is easy to bypass:</p>
<ol>
<li>Add a hostname in your DNS zone which resolves to <code>127.0.0.1</code>.</li>
<li>Set up a redirect to a blacklisted URL such as <code>file:///etc/passwd</code>.</li>
</ol>
<p>Applying the filter on the original URL is clearly not enough.</p>
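<p>A minimal sketch of such an application-level filter (hypothetical, not the code I actually wrote) makes the weakness obvious: the check only sees the URL as typed, not what the hostname resolves to or where the response redirects:</p>

```python
from urllib.parse import urlparse

BLOCKED_HOSTS = {"localhost", "127.0.0.1", "::1"}
BLOCKED_PORTS = {5432, 11211}

def is_url_allowed(url):
    """Naive URL filter of the kind described above."""
    parts = urlparse(url)
    if parts.scheme not in ("http", "https"):
        return False  # rejects file://, gopher://, ...
    if parts.hostname in BLOCKED_HOSTS:
        return False
    if parts.port in BLOCKED_PORTS:
        return False
    return True

# The filter does what it says on the tin...
assert not is_url_allowed("file:///etc/passwd")
assert not is_url_allowed("http://localhost/admin")

# ...but both bypasses sail through, because DNS resolution and
# HTTP redirects happen after the check:
assert is_url_allowed("http://resolves-to-127-0-0-1.attacker.example/")
assert is_url_allowed("http://redirects-to-file-url.attacker.example/")
```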
<h1 id="Install_and_configure_a_Squid_proxy">Install and configure a Squid proxy</h1>
<p>In order to fully restrict outgoing OpenID requests from the web
application, I used a <a href="http://www.squid-cache.org">Squid</a> HTTP proxy.</p>
<p>First, install the package:</p>
<pre><code>apt install squid3
</code></pre>
<p>and set the following in <code>/etc/squid3/squid.conf</code>:</p>
<pre><code>acl to_localnet dst 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
acl to_localnet dst 10.0.0.0/8 # RFC 1918 local private network (LAN)
acl to_localnet dst 100.64.0.0/10 # RFC 6598 shared address space (CGN)
acl to_localnet dst 169.254.0.0/16 # RFC 3927 link-local (directly plugged) machines
acl to_localnet dst 172.16.0.0/12 # RFC 1918 local private network (LAN)
acl to_localnet dst 192.168.0.0/16 # RFC 1918 local private network (LAN)
acl to_localnet dst fc00::/7 # RFC 4193 local private network range
acl to_localnet dst fe80::/10 # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 443
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny manager
http_access deny to_localhost
http_access deny to_localnet
http_access allow localhost
http_access deny all
http_port 127.0.0.1:3128
</code></pre>
<p>Ideally, I would like to use a whitelist approach to restrict requests to a
small set of valid URLs, but in the case of OpenID, the set of valid URLs is
not fixed. Therefore the only workable approach is a blacklist. The above
snippet whitelists port numbers (<code>80</code> and <code>443</code>) and blacklists requests to
<code>localhost</code> (a built-in squid
<a href="http://www.squid-cache.org/Doc/config/acl/">acl</a> variable which resolves to
<code>127.0.0.1</code> and <code>::1</code>) as well as known local IP ranges.</p>
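<p>The effect of these rules can be modelled in a few lines of Python using the standard <code>ipaddress</code> module (a rough approximation of squid's evaluation order, with <code>0.0.0.0/8</code> standing in for the first range):</p>

```python
import ipaddress

# Private and link-local ranges matched by the to_localnet acl above.
LOCAL_NETS = [ipaddress.ip_network(n) for n in (
    "0.0.0.0/8",       # RFC 1122 "this" network
    "10.0.0.0/8",      # RFC 1918
    "100.64.0.0/10",   # RFC 6598 shared address space
    "169.254.0.0/16",  # RFC 3927 link-local
    "172.16.0.0/12",   # RFC 1918
    "192.168.0.0/16",  # RFC 1918
    "fc00::/7",        # RFC 4193 unique local
    "fe80::/10",       # RFC 4291 link-local
)]
LOOPBACK = [ipaddress.ip_network("127.0.0.0/8"), ipaddress.ip_network("::1/128")]
SAFE_PORTS = {80, 443}

def proxy_allows(dest_ip, port):
    """Rough model of the http_access rules in the squid.conf above."""
    addr = ipaddress.ip_address(dest_ip)
    if port not in SAFE_PORTS:
        return False  # http_access deny !Safe_ports
    if any(addr in net for net in LOOPBACK):
        return False  # http_access deny to_localhost
    if any(addr in net for net in LOCAL_NETS):
        return False  # http_access deny to_localnet
    return True       # http_access allow localhost (from the web app)

assert not proxy_allows("127.0.0.1", 80)        # loopback is denied
assert not proxy_allows("10.1.2.3", 443)        # private range is denied
assert not proxy_allows("93.184.216.34", 5432)  # non-Safe port is denied
assert proxy_allows("93.184.216.34", 443)       # public HTTPS is allowed
```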
<h1 id="Expose_the_proxy_to_Django_in_the_WSGI_configuration">Expose the proxy to Django in the WSGI configuration</h1>
<p>In order to force all outgoing requests from Django to <a href="https://stackoverflow.com/questions/14284824/working-with-django-proxy-setup">go through the
proxy</a>,
I put the following in my <a href="http://wsgi.org/">WSGI</a> application
(<code>/etc/libravatar/django.wsgi</code>):</p>
<pre><code>os.environ['ftp_proxy'] = "http://127.0.0.1:3128"
os.environ['http_proxy'] = "http://127.0.0.1:3128"
os.environ['https_proxy'] = "http://127.0.0.1:3128"
</code></pre>
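<p>These are the standard <code>*_proxy</code> environment variables. Python's <code>urllib</code> reads them automatically (as do libraries built on top of it), so no application code has to change:</p>

```python
import os
import urllib.request

# Same variables as in the WSGI file above.
os.environ["http_proxy"] = "http://127.0.0.1:3128"
os.environ["https_proxy"] = "http://127.0.0.1:3128"

# urllib picks the proxies up from the environment automatically;
# every urlopen() call will now be routed through squid.
proxies = urllib.request.getproxies()
print(proxies["http"])   # http://127.0.0.1:3128
print(proxies["https"])  # http://127.0.0.1:3128
```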
<p>The whole thing seemed to work well in my limited testing. There is however
<a href="https://bugs.python.org/issue24311">a bug in urllib2</a> with proxying HTTPS
URLs that include a port number, and there is <a href="https://github.com/openid/python-openid/issues/83">an open issue in
python-openid</a> around
proxies and OpenID.</p>
<h1><a href="https://feeding.cloud.geek.nz/posts/looking-back-on-starting-libravatar/">Looking back on starting Libravatar</a></h1>
<p>2018-04-03</p>
<p><strong>Update (2018-07-31): <a href="https://blog.libravatar.org/posts/Libravatar.org_is_not_going_away/">Libravatar is not going away</a></strong></p>
<p>As noted on the <a href="https://blog.libravatar.org/posts/Libravatar.org_is_shutting_down_on_2018-09-01/">official Libravatar
blog</a>,
I will be shutting the service down on 2018-09-01.</p>
<p>It has been an <a href="https://ourincrediblejourney.tumblr.com/">incredible
journey</a> but Libravatar has been
more-or-less in maintenance mode for 5 years, so it's somewhat outdated in
its technological stack and I no longer have much interest in doing the work
that's required every two years when migrating to a new version of
Debian/Django. The free software community prides itself on transparency and
so while it is a <a href="https://blog.liw.fi/posts/2017/08/13/retiring_obnam/">difficult decision to
make</a>, it's time to
be upfront with the users who depend on the project and admit that the
project is not sustainable in its current form.</p>
<h1 id="Many_things_worked_well">Many things worked well</h1>
<p>The most motivating aspect of running Libravatar has been the steady organic
growth within the FOSS community, both in terms of traffic (in March 2018,
we served a total of 5 GB of images and 12 GB of <code>302</code> redirects to
Gravatar) and integrations with other sites and projects (Fedora, Debian,
Mozilla, the Linux kernel, Gitlab, Liberapay and many others), but also in
terms of users:</p>
<p><img alt="" src="https://feeding.cloud.geek.nz/posts/looking-back-on-starting-libravatar/cumulative_user_accounts.png" /></p>
<p>In addition, I wanted to validate that it is possible to run a FOSS service
without having to pay for anything out-of-pocket, so that it would be
financially sustainable. Hosting and domain registrations have been entirely
funded by the community, thanks to the generosity of sponsors and donors.
Most of the donations came through <a href="https://gratipay.com/">Gittip/Gratipay</a>
and <a href="https://liberapay.com/">Liberapay</a>. While Gratipay has now <a href="https://gratipay.news/the-end-cbfba8f50981">shut
down</a>, I encourage you to
<a href="https://liberapay.com/Liberapay/donate">support Liberapay</a>.</p>
<p>Finally, I made an effort to host Libravatar on FOSS infrastructure. That
meant shying away from popular proprietary services in order to make a point
that these convenient and well-known services aren't actually needed to run
a successful project.</p>
<h1 id="A_few_things_didn.26.2339.3Bt_pan_out">A few things didn't pan out</h1>
<p>On the other hand, there were also a few disappointments.</p>
<p>A lot of the <a href="https://wiki.libravatar.org/libraries/">libraries and plugins</a>
never implemented <a href="https://wiki.libravatar.org/api/">DNS federation</a>. That
was the key part of the protocol that made Libravatar a decentralized
service, but unfortunately the rest of the protocol was much easier to
implement and therefore many clients stopped there.</p>
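<p>For comparison, the part of the protocol that clients did implement is genuinely simple: hash the normalized email address and append it to the central endpoint. A rough sketch (MD5 shown; the API also supports SHA-256), skipping the SRV-based DNS federation lookup that should come first:</p>

```python
import hashlib

def simple_avatar_url(email):
    # The easy, non-federated part of the API: lowercase and trim the
    # email, hash it, and append the digest to the central CDN endpoint.
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return "https://seccdn.libravatar.org/avatar/" + digest

# A federation-aware client would first look up the SRV record for the
# email's domain and only fall back to the central endpoint if none exists.
print(simple_avatar_url("User@Example.com"))
```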
<p>In addition, it turns out that while the DNS system is essentially a
federated caching system for IP addresses, many DNS resolvers don't do a
good job of caching records, which created unnecessary latency for clients
that chose to support DNS federation.</p>
<p>The main disappointment was that very few people stepped up to run mirrors.
I designed the service so that it could scale easily in the same way that
Linux distributions have coped with increasing user bases: "ftp" mirrors. By
making the actual serving of images only require Apache and <code>mod_rewrite</code>, I
had hoped that anybody running Apache would be able to add an extra vhost to
their setup and start serving our static files. A few people did sign up for
this over the years, but it mostly didn't work. Right now, there are no
third-party mirrors online.</p>
<p>The other aspect that was a little disappointing was the lack of code
contributions. There were a handful from friends in the first couple of
months, but it's otherwise been a one-man project. I suppose that when a
service works well for what people use it for, there are fewer opportunities
for contributions (or less desire to contribute). The fact that the <a href="https://wiki.libravatar.org/development_environment/">dev environment
setup</a> was not the easiest could definitely have been a contributing
factor, but I've only ever had a single person ask about it, so it's not
clear that this was the limiting factor. Also, while our source code
repository was hosted on Github and open
for pull requests, we never even received a single drive-by contribution,
hinting at the fact that Github is not the magic bullet for community
contributions that many people think it is.</p>
<p>Finally, it turns out that it is harder to delegate sysadmin work (you need
root access, for one thing), which consumes the majority of the time in a mature
project. The general administration and maintenance of Libravatar has never
moved on beyond its core team of one. I don't have a lot of ideas here, but
I do want to join
<a href="http://scanlime.org/2011/05/cia-vc-service-is-down-indefinitely/">others</a>
who have flagged this as an area for "future work" in terms of project
sustainability.</p>
<h1 id="Personal_goals">Personal goals</h1>
<p>While I was originally inspired by <a href="http://static.fsf.org/nosvn/Evan_Prodromou_-_identi.ca_-_LibrePlanet_2009.spx">Evan Prodromou's
vision</a>
of a suite of FOSS services to replace the proprietary stack that everybody
relies on, starting a free software project is an inherently personal
endeavour: the shape of the project will be influenced by the personal goals
of the founder.</p>
<p>When I started the project in 2011, I had a few goals:</p>
<ul>
<li><p>I wanted to get experience with Python, Django, and Bazaar.</p></li>
<li><p>I wanted to speak at a <a href="https://python.nz/">Kiwi PyCon</a> which <a href="https://web.archive.org/web/20110808005944/http://nz.pycon.org/2010/talks/talk/72/">I
did</a>,
<a href="https://www.youtube.com/watch?v=wfDhGAMPS1g">twice</a>, but my Libravatar
experience also led me to speak at
<a href="http://penta.debconf.org/dc10_schedule///////events/682.en.html">DebConf</a>
<a href="https://summit.debconf.org/debconf14/meeting/16/outsourcing-your-webapp-maintenance-to-debian/">twice</a>,
<a href="https://www.youtube.com/watch?v=ufkYjt9HV64">linux.conf.au</a> and
<a href="https://web.archive.org/web/20161005202936/http://conferences.oreilly.com/oscon/oscon2011/public/schedule/detail/18773">OSCON</a>.</p></li>
<li><p>Career-wise, I wanted to move beyond PHP development, which I successfully
achieved by working for a <a href="https://logger.paua.org.nz/">new client</a> while
I was at <a href="https://catalyst.net.nz">Catalyst</a> and then getting hired by
<a href="https://mozilla.org">Mozilla</a> to work on
<a href="https://en.wikipedia.org/wiki/Mozilla_Persona">Persona</a> until it was
de-staffed following a <a href="http://arewereorganizedyet.com/">Mozilla reorg</a>.</p></li>
</ul>
<p>This project personally taught me a lot of different technologies and
allowed me to try out various web development techniques I wanted to explore
at the time. That was intentional: I chose my technologies so that even if
the project was a complete failure, I would still have gotten something out
of it.</p>
<h1 id="A_few_things_I.26.2339.3Bve_learned">A few things I've learned</h1>
<p>I learned many things along the way, but here are a few that might be useful
to other people starting a new free software project:</p>
<ul>
<li><p>Speak about your new project at every user group you can. It's important
to validate that you can get other people excited about your project. User
groups are a great (and cheap) way to kickstart your word of mouth
marketing.</p></li>
<li><p>When speaking about your project, ask simple things of the attendees (e.g.
create an account today, join the IRC channel). Often people want to
support you but they can't commit to big tasks. Make sure to take
advantage of all of the support you can get, especially early on.</p></li>
<li><p>Having your friends join (or lurk on!) an IRC channel means it's vibrant,
instead of empty, and there are people around to field simple questions or
tell people to wait until you're around. Nobody wants to be alone in a
channel with a stranger.</p></li>
</ul>
<h1 id="Thank_you">Thank you</h1>
<p>I do want to sincerely thank all of the people who contributed to the
project over the years:</p>
<ul>
<li>Jonathan Harker and Brett Wilkins for productive hack sessions in the
Catalyst office.</li>
<li>Lars Wirzenius, Andy Chilton and Jesse Noller for graciously hosting the
service.</li>
<li>Christian Weiske, Melissa Draper, Thomas Goirand and Kai Hendry for
running mirrors on their servers.</li>
<li>Chris Forbes, fr33domlover, Kang-min Liu and strk for writing and
maintaining client libraries.</li>
<li>The Wellington Perl Mongers for their invaluable feedback on an early prototype.</li>
<li>The <code>#equifoss</code> group for their ongoing support and numerous ideas.</li>
<li>Nigel Babu and Melissa Draper for producing the first (and only) project
stickers, as well as Chris Cormack for spreading them so effectively.</li>
<li>Adolfo Jayme, Alfredo Hernández, Anthony Harrington, Asier Iturralde
Sarasola, Besnik, Beto1917, Daniel Neis, Eduardo Battaglia, Fernando P
Silveira, Gabriele Castagneti, Heimen Stoffels, Iñaki Arenaza, Jakob
Kramer, Jorge Luis Gomez, Kristina Hoeppner, Laura Arjona Reina, Léo
POUGHON, Marc Coll Carrillo, Mehmet Keçeci, Milan Horák, Mitsuhiro
Yoshida, Oleg Koptev, Rodrigo Díaz, Simone G, Stanislas Michalak, Volkan
Gezer, VPablo, Xuacu Saturio, Yuri Chornoivan, yurchor and zapman for
making Libravatar speak so many languages.</li>
</ul>
<p>I'm sure I have forgotten people who have helped over the years. If your
name belongs in here and it's not, please email me or leave a comment.</p>
<h1><a href="https://feeding.cloud.geek.nz/posts/mysterious-400-bad-request-error-django-debug/">Mysterious 400 Bad Request in Django debug mode</a></h1>
<p>2017-06-11</p>
<p>While upgrading <a href="https://www.libravatar.org">Libravatar</a> to a more recent
version of <a href="https://www.djangoproject.com/">Django</a>, I ran into a
mysterious 400 error.</p>
<p>In debug mode, my site was working fine, but with <code>DEBUG = False</code>, I would
only see a page containing this error:</p>
<pre><code>Bad Request (400)
</code></pre>
<p>with no extra details in the web server logs.</p>
<h1 id="Turning_on_extra_error_logging">Turning on extra error logging</h1>
<p>To see the full error message, I <a href="https://docs.djangoproject.com/en/1.11/topics/logging/#examples">configured logging to a
file</a> by
adding this to <code>settings.py</code>:</p>
<pre><code>LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'file': {
'level': 'DEBUG',
'class': 'logging.FileHandler',
'filename': '/tmp/debug.log',
},
},
'loggers': {
'django': {
'handlers': ['file'],
'level': 'DEBUG',
'propagate': True,
},
},
}
</code></pre>
<p>Then I got the following error message:</p>
<pre><code>Invalid HTTP_HOST header: 'www.example.com'. You may need to add u'www.example.com' to ALLOWED_HOSTS.
</code></pre>
<h1 id="Temporary_hack">Temporary hack</h1>
<p>Sure enough, putting this in <code>settings.py</code> would make it work outside of debug mode:</p>
<pre><code>ALLOWED_HOSTS = ['*']
</code></pre>
<p>which means that there's a mismatch between the HTTP_HOST from Apache and
<a href="https://docs.djangoproject.com/en/1.11/topics/security/#host-headers-virtual-hosting">the one that Django expects</a>.</p>
<h1 id="Root_cause">Root cause</h1>
<p>The underlying problem was that the
<a href="https://git.launchpad.net/~libravatar/libravatar/commit/?id=a8c1002a39e7a1ef7d0ed7e5fb2ecf536ad4eede">Libravatar config file was missing the square brackets</a>
around the
<a href="https://docs.djangoproject.com/en/1.11/ref/settings/#allowed-hosts"><code>ALLOWED_HOSTS</code> setting</a>.</p>
<p>I had this:</p>
<pre><code>ALLOWED_HOSTS = 'www.example.com'
</code></pre>
<p>instead of:</p>
<pre><code>ALLOWED_HOSTS = ['www.example.com']
</code></pre>
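<p>The reason the bare string fails is that Django iterates over <code>ALLOWED_HOSTS</code> when validating the <code>Host</code> header, and iterating over a string yields single characters. A simplified sketch of the validation logic (Django's real <code>validate_host()</code> also normalizes case and trailing dots) makes this visible:</p>

```python
def validate_host(host, allowed_hosts):
    # Simplified version of Django's host validation: a pattern matches
    # if it is "*", an exact host, or a ".domain" suffix wildcard.
    for pattern in allowed_hosts:
        if pattern == "*" or pattern == host:
            return True
        if pattern.startswith(".") and host.endswith(pattern):
            return True
    return False

# With the square brackets, the host matches:
assert validate_host("www.example.com", ["www.example.com"])

# Without them, iterating the string yields 'w', 'w', 'w', '.', ...
# and no single character can ever equal the full hostname:
assert not validate_host("www.example.com", "www.example.com")
```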
<h1><a href="https://feeding.cloud.geek.nz/posts/server-migration-plan/">Server Migration Plan</a></h1>
<p>2013-08-08</p>
<p>I recently had to migrate the main <a href="https://www.libravatar.org">Libravatar</a> server to a new virtual
machine. In order to minimize risk and downtime, I decided to write a
<a href="http://zoompf.com/2013/08/you-need-a-website-migration-plan">migration plan</a>
ahead of time.</p>
<p>I am sharing this plan here in case it gives any ideas to others who have to
go through a similar process.</p>
<h1 id="Prepare_DNS">Prepare DNS</h1>
<ul>
<li>Change the TTL on the DNS entries for <code>libravatar.org</code> and <code>libravatar.com</code> (i.e. bare <code>A</code> and <code>AAAA</code> records) to <strong>3600</strong> seconds.</li>
<li>Remove the mirrors I don't control from the DNS load balancer (<code>cdn</code> <strong>and</strong> <code>seccdn</code>).</li>
<li>Remove the main server from <code>cdn</code> and <code>seccdn</code> in DNS.</li>
</ul>
<h1 id="Preparing_the_new_server">Preparing the new server</h1>
<ul>
<li><a href="http://wiki.libravatar.org/setup_instructions/">Setup the new server</a>.</li>
<li>Copy the database from the old site and restore it.</li>
<li>Copy <code>/var/lib/libravatar</code> from the old site.</li>
<li><p>Hack my local <code>/etc/hosts</code> file to point to the new server's IP address:</p>
<pre><code>xxx.xxx.xxx.xxx www.libravatar.org stats.libravatar.org cdn.libravatar.org
</code></pre></li>
<li><p>Test all functionality on the new site.</p></li>
</ul>
<h1 id="Preparing_the_old_server">Preparing the old server</h1>
<ul>
<li><p>Prepare a static "under migration" Apache config in <code>/etc/apache2/sites-enabled.static/default.conf</code>:</p>
<pre><code><VirtualHost *:80>
RewriteEngine On
RewriteRule ^ https://www.libravatar.org [redirect=301,last]
</VirtualHost>
<VirtualHost *:443>
SSLEngine on
SSLCertificateFile /etc/libravatar/www.crt
SSLCertificateKeyFile /etc/libravatar/www.pem
SSLCertificateChainFile /etc/libravatar/www-chain.pem
RewriteEngine On
RewriteRule ^ /var/www/html/migration.html [last]
<Directory /var/www/html>
Allow from all
Options -Indexes
</Directory>
</VirtualHost>
</code></pre></li>
<li><p>Put this static file in <code>/var/www/html/migration.html</code>:</p>
<pre><code><html>
<body>
<p>We are migrating to a new server. See you soon!</p>
<p>- <a href="https://identi.ca/libravatar">@libravatar</a></p>
</body>
</html>
</code></pre></li>
<li><p>Enable the rewrite module:</p>
<pre><code>a2enmod rewrite
</code></pre></li>
<li><p>Prepare an Apache config proxying to the new server in <code>/etc/apache2/sites-enabled.proxy/default.conf</code>:</p>
<pre><code><VirtualHost *:80>
RewriteEngine On
RewriteRule ^ https://www.libravatar.org [redirect=301,last]
</VirtualHost>
<VirtualHost *:443>
SSLEngine on
SSLCertificateFile /etc/libravatar/www.crt
SSLCertificateKeyFile /etc/libravatar/www.pem
SSLCertificateChainFile /etc/libravatar/www-chain.pem
SSLProxyEngine on
ProxyPass / https://www.libravatar.org/
ProxyPassReverse / https://www.libravatar.org/
</VirtualHost>
</code></pre></li>
<li><p>Enable the proxy-related modules for Apache:</p>
<pre><code>a2enmod proxy
a2enmod proxy_connect
a2enmod proxy_http
</code></pre></li>
</ul>
<h1 id="Migrating_servers">Migrating servers</h1>
<ul>
<li><p><a href="https://twitter.com/libravatar/status/1028767128227205120">Tweet</a> and <a href="https://identi.ca/libravatar/note/UFBI9ne8SsOftkYlSKPHQQ">dent</a> about the upcoming migration.</p></li>
<li><p>Enable the static file config on the old server (disabling the Django app):</p>
<pre><code>cd /etc/apache2/
mv sites-enabled sites-enabled.django
mv sites-enabled.static sites-enabled
apache2ctl configtest
systemctl restart apache2.service
</code></pre></li>
<li><p>Disable pgbouncer to ensure that Django cannot access postgres anymore:</p>
<pre><code>systemctl stop pgbouncer.service
</code></pre></li>
<li><p>Copy the database from the old server and restore it on the new server <strong>making sure it's in the UTF8 encoding</strong>:</p>
<pre><code>dropdb libravatar
createdb -O djangouser -E utf8 libravatar
pg_restore -d libravatar libravatar20180812.pg
</code></pre></li>
<li><p>Copy <code>/var/lib/libravatar</code> from the old server to the new one.</p>
<ul>
<li><p>On the new server:</p>
<pre><code>chmod a+w /var/lib/libravatar/avatar
rm -rf /var/lib/libravatar/avatar/*
chmod a+w /var/lib/libravatar/user
rm -rf /var/lib/libravatar/user/*
</code></pre></li>
<li><p>From laptop:</p>
<pre><code>rsync -a -H -v old.libravatar.org:/var/lib/libravatar/avatar .
rsync -a -H -v old.libravatar.org:/var/lib/libravatar/user .
rsync -a -H -v avatar/* new.libravatar.org:/var/lib/libravatar/avatar/
rsync -a -H -v user/* new.libravatar.org:/var/lib/libravatar/user/
</code></pre></li>
<li><p>On the new server:</p>
<pre><code>chmod go-w /var/lib/libravatar/avatar
chmod go-w /var/lib/libravatar/user
chown -R root:root /var/lib/libravatar/avatar/* /var/lib/libravatar/user/*
</code></pre></li>
</ul>
</li>
</ul>
<h1 id="Disable_mirror_sync">Disable mirror sync</h1>
<ul>
<li>Log into each mirror and comment out the update cron jobs in <code>/etc/cron.d/libravatar-slave</code>.</li>
<li>Make sure mirrors are no longer able to connect to the old server by moving <code>/var/lib/libravatar/master/.ssh/authorized_keys</code> to the new server and removing it from the old server.</li>
</ul>
<h1 id="Testing_the_main_site">Testing the main site</h1>
<ul>
<li><p>Hack my local <code>/etc/hosts</code> file to point to the new server's IPv4 address:</p>
<pre><code>xxx.xxx.xxx.xxx www.libravatar.org stats.libravatar.org cdn.libravatar.org seccdn.libravatar.org
</code></pre></li>
<li><p>Test all functionality on the new site.</p></li>
<li>Do a basic version of the previous test using IPv6.</li>
<li><p>If testing is successful, update DNS A and AAAA records (<code>libravatar.org</code> and <code>libravatar.com</code>) to point to the new server with a short TTL (in case we need to revert).</p></li>
<li><p>Enable the proxy config on the old server.</p>
<pre><code>cd /etc/apache2/
mv sites-enabled sites-enabled.static
mv sites-enabled.proxy/ sites-enabled
apache2ctl configtest
systemctl restart apache2.service
</code></pre></li>
<li><p>Hack my local <code>/etc/hosts</code> file to point to the old server's IP address.</p></li>
<li>Test basic functionality going through the proxy.</li>
<li>Remove local <code>/etc/hosts</code> hacks.</li>
</ul>
<h1 id="Re-enable_mirror_sync">Re-enable mirror sync</h1>
<ul>
<li>Build a new <code>libravatar-slave</code> package with an updated <code>known_hosts</code> file for the new server.</li>
<li>Log into each server I control and update that package.</li>
<li><p>Test the connection to the master (hacking <code>/etc/hosts</code> on the mirror if needed):</p>
<pre><code>sudo -u libravatar-slave ssh libravatar-master@0.cdn.libravatar.org
</code></pre></li>
<li><p>Uncomment the sync cron jobs in <code>/etc/cron.d/libravatar-slave</code>.</p></li>
<li>An hour later, make sure that new images are copied over and that the TLS certs are still working.</li>
<li>Remove <code>/etc/hosts</code> hacks from all mirrors.</li>
</ul>
<h1 id="Post_migration_steps">Post migration steps</h1>
<ul>
<li><a href="https://twitter.com/libravatar/status/364685629918949376">Tweet</a> and <a href="https://identi.ca/libravatar/note/wIyaqgYjSu-ig_FDXLI8rA">dent</a> about the fact that the migration was successful.</li>
<li><p>Send a test email to the support address included in the tweet/dent.</p></li>
<li><p>Take a backup of config files and data on the old server in case I forgot to copy something to the new one.</p></li>
<li><p>Get in touch with mirror owners to tell them to update <code>libravatar-slave</code> package and test ssh configuration.</p></li>
<li><p>Add third-party controlled mirrors back to the DNS load-balancer once they are up to date.</p></li>
<li><p>A few days later, change the TTL for the main site back to 43200 seconds.</p></li>
<li>A week later, kill the proxy on the old server by shutting it down.</li>
</ul>
<h1><a href="https://feeding.cloud.geek.nz/posts/migrating-libravatar-to-persona/">Migrating Libravatar to the Persona Observer API</a></h1>
<p>2012-08-01</p>
<p><a href="https://www.libravatar.org">Libravatar</a> recently upgraded its support for the <a href="https://login.persona.org">Persona</a> authentication system (formerly BrowserID).</p>
<p>Here are some notes on what was involved in migrating to the <a href="http://identity.mozilla.com/post/28513408358/a-new-api-for-persona">Observer API</a> for those who want to do the same on their sites.</p>
<h3 id="Moving_away_from_hidden_forms">Moving away from hidden forms</h3>
<p>Libravatar used to <code>POST</code> the user's assertion to the server-side verification code through a hidden HTML form, just like the <a href="https://github.com/mozilla/browserid-cookbook/blob/6b5292f9cdf4f25cb37dca5dcd91dcdaa3efaee6/python/python.cgi#L26">example Python CGI</a> from the <a href="https://github.com/mozilla/browserid-cookbook">BrowserID cookbook</a>.</p>
<p>This was a reasonable solution when the Persona code was only needed on a handful of pages, but the new API recommends loading the code on all pages where users can be logged in. Therefore, instead of copying this hidden form into the base template and including it on every page, I decided to <a href="https://git.nzoss.org.nz/libravatar/libravatar/commit/8fc6cab6186052dbdb1dee379141114f6b272233">switch to a jQuery.post()-based solution</a> prior to making any other changes.</p>
<p>As a side-effect of interacting with the backend in an <a href="https://en.wikipedia.org/wiki/Ajax_%28programming%29">AJAX</a> call, the error pages were converted to <a href="https://en.wikipedia.org/wiki/JSON">JSON</a> structures and are now displayed in a popup alert.</p>
<h3 id="From_.get.28.29_to_.watch.28.29_and_.request.28.29">From .get() to .watch() and .request()</h3>
<p>By far the biggest change that the new API requires is the move from <a href="https://developer.mozilla.org/en-US/docs/DOM/navigator.id.get">navigator.id.get()</a> to <a href="https://developer.mozilla.org/en-US/docs/DOM/navigator.id.watch">navigator.id.watch()</a> and <a href="https://developer.mozilla.org/en-US/docs/DOM/navigator.id.request">navigator.id.request()</a>. Instead of asking for an assertion to verify, two callbacks are registered through <code>watch()</code> and identification is triggered through <code>request()</code> (which fires the <code>onlogin</code> callback).</p>
<p>In the case of Libravatar, this involved:</p>
<ul>
<li>including the <a href="https://login.persona.org/include.js">Persona Javascript shim</a> on every page</li>
<li>moving the assertion verification code from the <code>get()</code> callback to the new <code>onlogin</code> callback</li>
<li>adding a redirection to the <a href="https://www.libravatar.org/account/logout">existing logout page</a> from the new <code>onlogout</code> callback</li>
<li>sharing part of the session state (i.e. which user is currently logged in, if any) with Persona through the <code>loggedInEmail</code> option to <code>watch()</code></li>
</ul>
<p>One thing to note is that while <a href="https://github.com/mozilla/browserid/pull/1806"><code>loggedInEmail</code> is going to be renamed to <code>loggedInUser</code></a>, this change hasn't hit the production version of Persona yet and so I <a href="https://git.nzoss.org.nz/libravatar/libravatar/commit/d39673bda7acfa615b52c9eaba98b565c14bcbf3">reverted to the old name</a> after noticing that <a href="https://github.com/mozilla/browserid/issues/2145"><code>onlogin</code> was unnecessarily called on every page load</a> (a fairly expensive operation given the need to transmit and verify the assertion server-side).</p>
<h3 id="Simplifying_Content_Security_Policy_headers">Simplifying Content Security Policy headers</h3>
<p>The <a href="https://feeding.cloud.geek.nz/2011/11/using-browserid-and-content-security.html">CSP headers</a> that Libravatar used to set on the pages that made use of the Persona Javascript shim now need to be set on every page, which is actually a nice <a href="https://git.nzoss.org.nz/libravatar/libravatar/commit/ae4c4db7859193eed6c85d820d95a1134599f21c">simplification of our Apache config</a>.</p>
<p>Note that if your CSP headers still refer to <code>browserid.org</code>, you must <a href="https://mail.mozilla.org/pipermail/persona-notices/2012/000001.html">change them to <code>login.persona.org</code></a>.</p>
<h3 id="Letting_Persona_know_about_changes_in_login_state">Letting Persona know about changes in login state</h3>
<p>One important change with respect to the old API is that Persona now keeps track of the login state for your site. If Persona finds a discrepancy between its idea of what your state should be and what you are advertising, it will trigger the appropriate callback (<code>onlogin</code> or <code>onlogout</code>) and attempt to resolve the conflict.</p>
<p>This is a very important feature since it will enable features like global logout and persistent cross-device logins, but it does mean that you have to notify Persona whenever your login state changes. If you forget to do this, your state will be automatically changed to match what Persona expects to see.</p>
<p>In Libravatar, this means that when users delete their account, we need to kill their session and <a href="https://git.nzoss.org.nz/libravatar/libravatar/commit/96513befe1668f0759968bd8417eb8c7e8fbde09">tell Persona about it</a> (through <a href="https://developer.mozilla.org/en/DOM/navigator.id.logout">navigator.id.logout()</a>). Otherwise, Persona will log them in again, which will of course cause a new account to be provisioned.</p>
<h3 id="Working_around_the_internal_login_state">Working around the internal login state</h3>
<p>The most complicated part of this migration to the new API was around our "add email" functionality, which lets users add extra emails to their existing Libravatar account.</p>
<p>With the old <code>get()</code> API, adding emails was as easy as requesting additional assertions and verifying them. Under the Observer API, requesting an assertion also changes the internal state that Persona keeps for that website. In practice, it means that after adding a new email in Libravatar, we need to <a href="https://git.nzoss.org.nz/libravatar/libravatar/commit/8b4dff7bbd491366fd75fa1fd7309ff1992e6e4e">update the "logged in" identifier</a> to match the new one. Failure to do this will prompt Persona to invoke the <code>onlogout</code> callback with a different email, which will cause the email to get added to a new Libravatar account instead.</p>
<p>There are also two corner cases where Libravatar needs to fall back to its manual authentication backend and tell Persona that nobody is logged in:</p>
<ul>
<li>when users <a href="https://git.nzoss.org.nz/libravatar/libravatar/commit/0c597856c35c4138613ace28770bf8a1b27756c4">remove from their account</a> the email address that their Persona session is tied to</li>
<li>when users <a href="https://git.nzoss.org.nz/libravatar/libravatar/commit/f2b8772a0ac7766e7f4443c4c999e12914a2e940">unsuccessfully attempt to add</a> an email that's already claimed by another account</li>
</ul>
<p>In any case, despite these hacks, I got it all working in the end, and I'm hopeful that <a href="https://github.com/mozilla/browserid/issues/2152">we'll eventually find a cleaner way to support this use case</a>.</p>
<h3 id="Taking_advantage_of_the_new_features">Taking advantage of the new features</h3>
<p>The most visible feature that the new API brings (as <a href="https://git.nzoss.org.nz/libravatar/libravatar/commit/bccf97a3709dec79d6ac6d7848c9f8851f31f19f">options to <code>request()</code></a>) is the <a href="http://identity.mozilla.com/post/27122712140/new-feature-adding-your-websites-name-and-logo-to-the">ability to add your name and logo</a> to the Persona popup window:</p>
<p><img alt="" src="https://feeding.cloud.geek.nz/posts/migrating-libravatar-to-persona/persona_branded_popup.png" /></p>
<p>The second feature <a href="https://git.nzoss.org.nz/libravatar/libravatar/commit/984e51875b215156b4b85d465bfd0b17ef3c9628">I tried to enable</a> on Libravatar is the new <a href="http://identity.mozilla.com/post/27914354400/improvements-to-the-first-time-sign-up-flow">redirectTo</a> <code>request()</code> option. Unfortunately, I had to <a href="https://git.nzoss.org.nz/libravatar/libravatar/commit/b6d5919f9aaa31f8ba40deecec64aeafb5189632">revert this change</a> since in our case, going straight to the profile page causes the <a href="https://docs.djangoproject.com/en/1.2/topics/auth/#the-login-required-decorator">@login_required</a> Django decorator to run before the <code>onlogin</code> callback has a chance to set the session cookie.</p>
<p>In any case, redirecting to the login page already worked and so Libravatar probably doesn't need to make use of this Persona feature.</p>
<h3 id="Conclusion">Conclusion</h3>
<p>This migration was harder than I was expecting, but I'm confident that it will become easier in the next few weeks as the <a href="https://github.com/mozilla/browserid">implementation</a> is polished and <a href="https://developer.mozilla.org/en-US/docs/browserid">documentation</a> refreshed. I'm very excited about the Observer API because of the new security features and native integration it will enable.</p>
<p>If you use Persona on your site, make sure you sign up to the new <a href="https://mail.mozilla.org/listinfo/persona-notices">service announcement</a> list.</p>
Optimising PNG files
https://feeding.cloud.geek.nz/posts/optimising-png-files/
<a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>
Published: 2011-12-04T05:30:00Z (updated: 2021-06-11T20:43:57Z)
<p>I have written about <a href="https://feeding.cloud.geek.nz/2009/10/reducing-website-bandwidth-usage.html">using lossless optimisation techniques to reduce the size of images</a> before, but I recently learned of a few other <a href="http://developer.yahoo.com/yslow/smushit/faq.html#faq_crushtool">tools</a> to further reduce the size of <a href="http://en.wikipedia.org/wiki/Portable_Network_Graphics">PNG</a> images.</p>
<h3 id="Basic_optimisation">Basic optimisation</h3>
<p>While you could use <a href="http://www.smushit.com/">Smush.it</a> to manually optimise your images, if you want a single Open Source tool you can use in your scripts, <a href="http://optipng.sourceforge.net/">optipng</a> is the most effective one:</p>
<pre><code>optipng -o9 image.png
</code></pre>
<h3 id="Removing_unnecessary_chunks">Removing unnecessary chunks</h3>
<p>While not as effective as optipng in its basic optimisation mode, <a href="http://pmt.sourceforge.net/pngcrush/">pngcrush</a> can be used to remove <a href="http://en.wikipedia.org/wiki/Portable_Network_Graphics#Ancillary_chunks">unnecessary chunks</a> from PNG files:</p>
<pre><code>pngcrush -q -rem gAMA -rem alla -rem text image.png image.crushed.png
</code></pre>
<p>Depending on the software used to produce the original PNG file, this can yield significant savings so I usually start with this.</p>
<h3 id="Reducing_the_colour_palette">Reducing the colour palette</h3>
<p>When optimising images uploaded by users, it's not possible to know whether or not the palette size can be reduced without too much quality degradation. On the other hand, if you are optimising your own images, it might be worth trying this lossy optimisation technique.</p>
<p>For example, <a href="http://cdn.libravatar.org/nobody/512.png">this image</a> went from 7.2 kB to 5.2 kB after running it through <a href="http://pngnq.sourceforge.net/">pngnq</a>:</p>
<pre><code>pngnq -f -n 32 -s 3 image.png
</code></pre>
<h3 id="Re-compressing_final_image">Re-compressing final image</h3>
<p>Most PNG writers use <a href="http://zlib.net/">zlib</a> to compress the final output, but it turns out that there are better Deflate implementations out there.</p>
<p>Using <a href="http://advancemame.sourceforge.net/comp-readme.html">AdvanceCOMP</a> I was able to bring the <a href="http://cdn.libravatar.org/nobody/512.png">same image</a> as above from 5.2 kB to 4.6 kB:</p>
<pre><code>advpng -z -4 image.png
</code></pre>
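<p>These tools can be chained into a single script. Since a step can, in some cases, produce a <em>larger</em> file than the one it started with, it's worth keeping each candidate only when it is actually smaller. Here is a minimal sketch of that idea (the <code>keep_if_smaller</code> helper and the file names are mine, for illustration only):</p>
<pre><code>#!/bin/sh
# Replace ORIGINAL with CANDIDATE only if the candidate is smaller,
# otherwise discard the candidate.
keep_if_smaller() {
    original="$1"
    candidate="$2"
    if [ "$(wc -c < "$candidate")" -lt "$(wc -c < "$original")" ]; then
        mv "$candidate" "$original"
    else
        rm -f "$candidate"
    fi
}

# Example pipeline (commented out so the helper can be reused on its own):
# pngcrush -q -rem gAMA -rem alla -rem text image.png image.crushed.png
# keep_if_smaller image.png image.crushed.png
# optipng -o9 image.png
# advpng -z -4 image.png
</code></pre>
<p>As far as I can tell, optipng and advpng already refuse to enlarge a file, so the helper mostly matters for the pngcrush and pngnq steps, which always write their output.</p>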
<h3 id="When_the_source_image_is_an_SVG">When the source image is an SVG</h3>
<p>Another thing I noticed while optimising PNG files is that rendering a PNG of the right size straight from an SVG file produces a smaller result than exporting a large PNG from that same SVG and then resizing the PNG to smaller sizes.</p>
<p>Here's how you can use <a href="http://inkscape.org/">Inkscape</a> to generate an 80x80 PNG:</p>
<pre><code>inkscape --without-gui --export-width=80 --export-height=80 --export-png=80.png image.svg
</code></pre>
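<p>Since avatars get served at many different sizes, that command ends up in a loop in practice. Here's a sketch (the <code>render_sizes</code> helper is mine; set <code>RUN=echo</code> to preview the commands without executing them):</p>
<pre><code>#!/bin/sh
# Render one PNG per requested size, straight from the SVG source.
# Set RUN=echo to print the inkscape commands instead of running them.
render_sizes() {
    svg="$1"
    shift
    for size in "$@"; do
        $RUN inkscape --without-gui \
            --export-width="$size" --export-height="$size" \
            --export-png="${size}.png" "$svg"
    done
}

# RUN=echo render_sizes image.svg 16 32 48 64 80 128
</code></pre>
<p>Note that these are the Inkscape 0.x command-line options used above; more recent Inkscape releases renamed them (for example, <code>--export-png</code> became <code>--export-filename</code>).</p>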
Using BrowserID and Content Security Policy together
https://feeding.cloud.geek.nz/posts/using-browserid-and-content-security/
<a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>
Published: 2011-11-01T09:40:00Z (updated: 2021-06-11T20:43:57Z)
<p>While looking into why <a href="https://browserid.org/">BrowserID</a> logins on <a href="https://www.libravatar.org/">Libravatar</a> didn't work on <a href="http://firefox.com/">Firefox</a>, I remembered that I had recently added <a href="https://developer.mozilla.org/en/Security/CSP">Content Security Policy</a> headers. Here's what I had to do to make BrowserID work on a CSP-enabled site.</p>
<h3 id="Create_a_hidden_form_and_a_login_link">Create a hidden form and a login link</h3>
<p>This is what the login button looked like before CSP:</p>
<pre><code><form id="browserid-form" action="/account/login_browserid" method="post">
<input id="browserid-assertion" type="hidden" name="assertion" value="">
<input style="display: none" type="submit">
</form>
<a href="javascript:try_browserid()">Login with BrowserID</a>
<script type="text/javascript">
function try_browserid() {
navigator.id.getVerifiedEmail(function(assertion) {
if (assertion) {
document.getElementById('browserid-assertion').setAttribute('value', assertion);
document.getElementById('browserid-form').submit();
}
});
}
</script>
</code></pre>
<p>The hidden form is there because the assertion needs to be sent to the application via <code>POST</code> to avoid leaking it out, but otherwise the code is pretty straightforward.</p>
<p>Now of course, with CSP turned ON, there are two problems:</p>
<ul>
<li>links <a href="https://developer.mozilla.org/en/Security/CSP/Default_CSP_restrictions#javascript:.C2.A0URIs">cannot use</a> <code>javascript:</code> URIs</li>
<li>inline Javascript is <a href="https://developer.mozilla.org/en/Security/CSP/Default_CSP_restrictions#Internal_.3Cscript.3E_nodes">forbidden</a></li>
</ul>
<p>So we can start by converting the login link to:</p>
<pre><code><a id="browserid-link" href="#">Login with BrowserID</a>
<script src="browserid_stuff.js" type="text/javascript"></script>
</code></pre>
<p>then moving the <code>try_browserid()</code> function to a separate file served from the same domain, and finally hooking the link up to that function using Javascript (in that same <code>browserid_stuff.js</code> file):</p>
<pre><code>var link = document.getElementById('browserid-link');
link.addEventListener('click', try_browserid, false);
</code></pre>
<h3 id="Exposing_the_right_X-Content-Security-Policy_header">Exposing the right X-Content-Security-Policy header</h3>
<p>In order to load the Javascript code from <code>browserid.org</code>, we need the following as part of the policy:</p>
<pre><code>script-src https://browserid.org
</code></pre>
<p>but that's not enough since the BrowserID login form seems to use some sort of <code><iframe></code> trick and so we need to add this extra permission as well:</p>
<pre><code>frame-src https://browserid.org
</code></pre>
<p>Here is the final policy I ended up setting (using Apache <a href="http://httpd.apache.org/docs/2.2/mod/mod_headers.html">mod_headers</a>) for the Libravatar login page:</p>
<pre><code><Location /account/login>
Header set X-Content-Security-Policy: "default-src 'self'; frame-src 'self' https://browserid.org ; script-src 'self' https://browserid.org"
</Location>
</code></pre>
Reducing the size of Apache 301 and 302 responses
https://feeding.cloud.geek.nz/posts/reducing-size-of-apache-301-and-302/
<a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>
Published: 2011-10-22T23:50:00Z (updated: 2021-06-11T20:43:57Z)
<p>Looking through the <a href="https://www.libravatar.org/">Libravatar</a> access logs, I found that most of the traffic we currently serve consists of 302 redirects to Gravatar. Optimising that path is therefore very important.</p>
<p>While Apache allows admins to provide <a href="https://httpd.apache.org/docs/2.2/mod/core.html#errordocument">custom error pages</a> for things like 404 or 500, it's not quite that straightforward for 30x return codes.</p>
<h3 id="Standard_301_.2F_302_responses">Standard 301 / 302 responses</h3>
<p>By default, Apache (and most web servers out there) returns a fairly large HTML page along with a 30x redirection. Try it for yourself by disabling automatic redirections in Firefox (Preferences | Advanced | General | Accessibility) or by installing the <a href="https://addons.mozilla.org/en-US/firefox/addon/requestpolicy/?src=search">Request Policy</a> add-on.</p>
<p>The 302 responses sent by Libravatar looked like this:</p>
<pre><code>$ curl -i http://cdn.libravatar.org/avatar/12345678901234567890123456789012
HTTP/1.1 302 Found
Date: Wed, 21 Sep 2011 01:51:52 GMT
Server: Apache
Cache-Control: max-age=86400
Location: http://www.gravatar.com/avatar/12345678901234567890123456789012.jpg?r=g&s=80&d=http://cdn.libravatar.org/nobody/80.png
Vary: Accept-Encoding
Content-Length: 310
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>302 Found</title>
</head><body>
<h1>Found</h1>
<p>The document has moved <a href="http://www.gravatar.com/avatar/12345678901234567890123456789012.jpg?r=g&s=80&d=http://cdn.libravatar.org/nobody/80.png">here</a>.</p>
</body></html>
</code></pre>
<p>As you can see, the body of the response is just as large as the headers and isn't really necessary.</p>
<h3 id="Body-less_301_responses">Body-less 301 responses</h3>
<p>After reading about the <a href="https://httpd.apache.org/docs/2.2/custom-error.html">ErrorDocument directive</a>, I created an empty file called <code>302</code> in the root of the web server and included this directive in my vhost configuration file:</p>
<pre><code>ErrorDocument 302 /302
</code></pre>
<p>which made the responses look like this:</p>
<pre><code>$ curl -i http://example.com/redir
HTTP/1.1 302 Found
Date: Wed, 21 Sep 2011 03:39:26 GMT
Server: Apache
Last-Modified: Wed, 21 Sep 2011 03:39:17 GMT
ETag: "8024d-0-4ad6b52201036"
Accept-Ranges: bytes
Content-Length: 0
Content-Type: text/plain
</code></pre>
<p>This one does have a completely empty body. However, there's an important problem with this solution: the <code>Location</code> header is missing! Not much point in reducing the size of the redirect if it's no longer working.</p>
<h3 id="Custom_302_response_page">Custom 302 response page</h3>
<p>The next thing I tried (and ended up settling on) is this:</p>
<pre><code>ErrorDocument 302 " "
</code></pre>
<p>which results in a 1-byte response (a single space) in the body:</p>
<pre><code>$ curl -i http://example.com/redir
HTTP/1.1 302 Found
Date: Wed, 21 Sep 2011 03:37:50 GMT
Server: Apache
Location: http://www.example.com
Vary: Accept-Encoding
Content-Length: 1
Content-Type: text/html; charset=iso-8859-1
</code></pre>
<p>There is still a little bit of unnecessary information in this response (character set, <code>Vary</code> and <code>Server</code> headers), but it's a major improvement over the original.</p>
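<p>To compare variants without reading the raw responses, curl can report the sizes directly through its <code>--write-out</code> option. A small helper (the <code>redirect_size</code> name is mine):</p>
<pre><code>#!/bin/sh
# Print the header and body sizes of a response, without following redirects.
redirect_size() {
    curl -s -o /dev/null \
        -w 'headers: %{size_header} bytes, body: %{size_download} bytes\n' \
        "$1"
}

# redirect_size http://cdn.libravatar.org/avatar/12345678901234567890123456789012
</code></pre>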
<p>If you know of any other ways to reduce this further, please leave a comment!</p>
Adding X-Content-Security-Policy headers in a Django application
https://feeding.cloud.geek.nz/posts/adding-x-content-security-policy/
<a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>
Published: 2011-09-18T08:30:00Z (updated: 2021-06-11T20:43:57Z)
<p>Content Security Policy is a <a href="http://www.w3.org/Security/wiki/Content_Security_Policy">proposed HTTP extension</a> which allows websites to restrict the external content that can be displayed by visiting web browsers. By expressing a set of rules to be enforced by the browser, a website is able to prevent the injection of outside resources by malicious users.</p>
<p>While adding support for the <a href="https://dvcs.w3.org/hg/content-security-policy/raw-file/tip/csp-unofficial-draft-20110303.html">March 2011 draft</a> in <a href="https://www.libravatar.org/">Libravatar</a>, I looked at three different approaches.</p>
<h3 id="Controlling_the_headers_in_the_application">Controlling the headers in the application</h3>
<p>The first approach I considered was to have the Django application output all of the headers, which is what the <a href="https://github.com/mozilla/django-csp">django-csp</a> module does. Unfortunately, I need to be able to vary the policy between pages (the views in Libravatar have different requirements) and that's one of the things that hasn't been implemented yet in that module.</p>
<p>Producing the same headers by hand is fairly simple:</p>
<pre><code>response = render_to_response('app/view.html')
response['X-Content-Security-Policy'] = "allow 'self'"
return response
</code></pre>
<p>but it would mean adding a bit of code to every view and/or writing a custom wrapper for <code>render_to_response()</code>.</p>
<h3 id="Setting_a_default_header_in_Apache">Setting a default header in Apache</h3>
<p>Ideally, I'd like to be able to set a default header in Apache using <a href="https://httpd.apache.org/docs/2.2/mod/mod_headers.html#header">mod_headers</a> and then override it as needed inside the application.</p>
<p>The first problem with this solution is that it's not possible (as far as I can tell) for a Django application to override a header set by Apache:</p>
<ul>
<li>mod_headers adds its response header after <a href="http://code.google.com/p/modwsgi/">mod_wsgi</a> has returned (unless <a href="https://httpd.apache.org/docs/2.2/mod/mod_headers.html#early">early processing</a> is used).</li>
<li>Django's <a href="https://docs.djangoproject.com/en/1.3/ref/request-response/#django.http.HttpResponse">response objects</a> cannot see the early headers set by Apache.</li>
</ul>
<p>The second problem is that mod_headers <a href="https://issues.apache.org/bugzilla/show_bug.cgi?id=51842">doesn't have</a> an <em>action</em> that adds/sets a header only if it didn't already exist. It does have <code>append</code> and <code>merge</code> actions which could in theory be used to add extra terms to the policy, but they unfortunately use a different separator (the comma) from the one in the CSP spec (which uses semi-colons).</p>
<h3 id="Always_set_headers_in_Apache">Always set headers in Apache</h3>
<p>While I would have liked to get the second approach working, in the end, I included all of the <a href="https://developer.mozilla.org/en/Security/CSP/CSP_policy_directives">CSP directives</a> within the main Apache config file:</p>
<pre><code>Header set X-Content-Security-Policy: "allow 'self'; options inline-script; img-src 'self' data:"
<Location /account/confirm_email>
Header set X-Content-Security-Policy: "allow 'self'; options inline-script; img-src *"
</Location>
<Location /tools/check>
Header set X-Content-Security-Policy: "allow 'self'; options inline-script; img-src *"
</Location>
</code></pre>
<p>The first <code>Header</code> call sets a default policy which is later overridden based on the path to the Django view that's being used.</p>
<h3 id="Related_technologies">Related technologies</h3>
<p>If you are interested in Content Security Policy, you may also want to look into <a href="http://noscript.net/abe/web-authors.html">Application Boundaries Enforcer</a> (part of the <a href="https://addons.mozilla.org/en-US/firefox/addon/noscript/">NoScript</a> Firefox extension) for more security rules that can be supplied by the server and enforced client-side.</p>
<p>It's also worth mentioning the excellent <a href="https://addons.mozilla.org/en-US/firefox/addon/requestpolicy/">Request Policy</a> extension which solves the same problem by letting users whitelist the cross-site requests they want to allow.</p>
Translating Django applications using Launchpad
https://feeding.cloud.geek.nz/posts/translating-django-applications-using/
<a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>
Published: 2011-08-01T23:58:00Z (updated: 2021-06-11T20:43:57Z)
<p>One of my <a href="http://www.djangoproject.com">Django</a>-based projects, <a href="https://www.libravatar.org">Libravatar</a>, makes use of Launchpad's interface to keep its <a href="https://translations.launchpad.net/libravatar">translations</a> current. Unfortunately, the <a href="https://docs.djangoproject.com/en/1.3/howto/i18n/">PO file layout</a> that Django insists on using isn't directly compatible with <a href="https://help.launchpad.net/Translations/YourProject/ImportPolicy#Sample%20directory%20layout">the one that Launchpad needs</a> in order to setup <a href="https://help.launchpad.net/Translations/YourProject/ImportingTemplates#Enabling%20automatic%20template%20imports">automatic import</a> of translations on a branch.</p>
<p>The solution I found is to use the mandatory Launchpad layout and then create symlinks in the right places for Django.</p>
<p>Here is where the Libravatar PO files are located in the <a href="https://code.launchpad.net/~libravatar/libravatar/trunk">bzr repository</a>:</p>
<pre><code>po/libravatar/de.po
po/libravatar/fr.po
po/libravatar/libravatar.pot
</code></pre>
<p>and here are the (relative) symbolic links:</p>
<pre><code>libravatar/locale/de/LC_MESSAGES/django.po
-> ../../../../po/libravatar/de.po
libravatar/locale/en/LC_MESSAGES/django.po
-> ../../../../po/libravatar/libravatar.pot
libravatar/locale/fr/LC_MESSAGES/django.po
-> ../../../../po/libravatar/fr.po
</code></pre>
<p>Note that the "en" localization is left untranslated and used as the translation template (POT file).</p>
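<p>The layout above can be recreated with a few lines of shell, run from the root of the branch (only the locales shown in the listing are included; new languages would be added to the loop as Launchpad imports them):</p>
<pre><code>#!/bin/sh
# Create the Django locale tree as relative symlinks into the
# Launchpad-mandated po/ directory.
set -e
for lang in de fr; do
    mkdir -p "libravatar/locale/${lang}/LC_MESSAGES"
    ln -sf "../../../../po/libravatar/${lang}.po" \
        "libravatar/locale/${lang}/LC_MESSAGES/django.po"
done

# The "en" locale points at the untranslated template (POT file).
mkdir -p libravatar/locale/en/LC_MESSAGES
ln -sf ../../../../po/libravatar/libravatar.pot \
    libravatar/locale/en/LC_MESSAGES/django.po
</code></pre>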