
The technology behind this blog has bit-rotted quite a bit lately. And of course it
has not seen any updates or new content in years, but that is an entirely
different story.

To make sure the content still looks good in modern browsers, I have now revamped
some of the tech behind the site. For starters, I removed Cufon, a
library for rendering fancy text client-side. I did not use it much, and it made
some text on the page look blurry on my phone.

I also switched from pre-rendered images for math display to MathJax.
MathJax is not fully convincing: it is pretty heavyweight in terms of size, has
tons of features I do not need, and renders rather slowly. But it works well on my
phone and desktop, and the math looks better than the images did, so I'll stick
with it for now.

So, it took me two weeks to finally get around to a status report, and Peter
even had to request it forcefully. But I have not been idle in this time.
Let's see.

Pango? Harfbuzz? SDL_TTF!

I spent quite some time investigating Pango and Harfbuzz; I blogged about it
in my previous post. I knew back then already that Pango was not ideal and
that even Wesnoth was not using Pango for its fancy help rendering (it uses
it for some other types of text though). I then investigated Harfbuzz further,
and though it seems like it will one day become what we need, it is not there
yet. In fact, it does not even have a stable version released at the moment. It
can handle international text pretty well, but it does not help with the
left-to-right, top-to-bottom layouting of text, nor with breaking text into
paragraphs.

So we are back to square one: the old and trusted SDL_TTF. I designed my code
around SDL as the graphics engine; the text rendering is plugged in and uses
SDL_TTF, but it should be easy to replace it with any other text engine once
one establishes itself as the standard - be it Pango, Harfbuzz or anything else
in the future.

What works?

I just finished my initial implementation of the engine: it has all the
features that Venatrix and I have marked as desirable. Some of the
features (like tables) are rather rudimentary, but I think they should be enough
for us. The whole engine is currently a stand-alone library; in the next
step, I will plug it into the Widelands code.

But let's move on to pretty pictures. Here are some examples.

<rt><sub padding=5 background=000000><sub width=350 background=win_bg.png padding=2>
<p align=center><font size=30 color=F2DF91 underline=true>This is a heading</font></p>
<vspace gap=2>
<sub margin=5 padding=5 background="ffffff" float=left>
<p><img src="wl-ico-128.png"></p>
<p align=center><font size=10>This is the subtext for the image. Kinda like a
caption.</font></p></sub>
<p indent=5><font color=FEFE00 size=14>
Lorem ipsum dolor sit amet, consetetur sadipscing elitr, <font italic=true ref="hyperlink">sed diam nonumy</font> eirmod tempor <font underline=1 ref="somereference">invidunt ut labore et</font> dolore
magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo
<font bold=true>duo dolores et ea rebum</font>. Stet clita kasd
gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet.
</font></p></sub></sub></rt>

This code renders to the following picture:

This already shows off some of the features: changing fonts inside a
paragraph, sub-layouts that can float inside the text, margins and paddings
for some elements (though this is not as fully featured as in HTML) and of
course images. It also contains a ref attribute on a font tag, which makes
that part of the text clickable. This will make hyperlinks in the help
possible.

Each row is a set of three fixed-width cells separated by a <space> tag which
expands as needed (in this case not at all). The lines are separated by a
linebreak tag <br>. This results in a kind of table with a fixed column count
and fixed column widths. Good enough for Widelands, I hope:
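The markup example for the table did not survive in this copy of the post. Using only tags that appear in the other examples here, such a fixed-width table might be written roughly like this (widths and cell texts are made up):

```
<rt><p>
<sub width=150><p>Cell 1</p></sub><space><sub width=150><p>Cell 2</p></sub><space><sub width=150><p>Cell 3</p></sub><br>
<sub width=150><p>Cell 4</p></sub><space><sub width=150><p>Cell 5</p></sub><space><sub width=150><p>Cell 6</p></sub>
</p></rt>
```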

Bullet point lists are possible as well:

<rt><p><font size=20 underline=1>A bullet list</font></p>
<sub padding=10 width=490><p>
<sub valign=top><p><space fill=" "><img src=bullet.png></p></sub><sub width=450><p>
Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy
eirmod tempor.
</p></sub><sub valign=top><p><space fill=" "><img src=bullet.png></p></sub><sub width=450><p>
Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy
eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam
voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet
clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit
amet.
</p></sub><sub valign=top><p><space fill=" "><img src=bullet.png></p></sub><sub width=450><p>
Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy
eirmod tempor.
</p></sub></p></sub></rt>

This uses a lot of sub-layouts to achieve the desired result. It does not read
very nicely, but which markup does? I plan to hide all of this in some Lua
functions, just as we already do for scenarios, to make it nice to use as
well. This wall of text renders as

What's next?

The next step is to integrate this layout engine into Widelands. Caching will
be really important here, because rendering with SDL_TTF is very slow. I feel
that the technical challenges of getting this engine on board will be
surmountable; the one thing I am really not looking forward to is converting
all the rich texts we already have to work with the new engine. Especially
the hard-coded texts inside the source, the files in txts/ and the early
campaigns (Barbarians and Empire) will be a lot of tedious work. I hope some
people hop on board to help me out with that.

So, expect a branch with this engine on Launchpad soonish.

So, I finally handed in my PhD thesis. That does not mean that everything is said
and done; there is still an oral examination, a 30-minute talk and some
demonstration work to be done. But it means that I am now officially out of
my previous job.

That means that before I start at Google in September, I have around three
months to use however I see fit (okay, okay, I also need to find a flat,
move, prepare for the oral examination, prepare the demo, and organize a huge
celebration. And I have some other obligations remaining). I want to use a lot
of that time for Widelands. To keep myself on track and
motivated, I decided to keep a development diary on this blog.

First Project: Make the Widelands Help awesome

Venatrix, Astuur and others have begun the immense project of writing a help
for Widelands. I have provided a minimal framework that gives buildings a help
window via a Lua script in the building's directory. This is already immensely
useful and significantly better than no help at all, but it feels rather
limited: Widelands' text handling and layouting capabilities are rather basic,
and there is currently no way to document anything other than buildings. My first
project for now is to improve this situation.

Text layouting

Text layouting is a difficult problem, especially when we also consider
internationalization (just think of Chinese, for example). I started out by
evaluating the various libraries that exist for this problem. There are two
contenders: the first is Pango, the second Harfbuzz.

Pango

Pango is the text layouting engine of the Gnome desktop environment, and it
is well suited for that task. Using Cairo for rendering and its own (limited)
markup language, it is next to trivial to render a UTF-8 string containing various
languages into a rectangular area. This is not enough for what I want though:
just think about an image inside a paragraph. Text should flow around it,
so it needs to be broken into a non-rectangular shape. This is not easy with Pango:
it does provide a low-level API to reflow text independently, but it is
awkward, and one loses a lot of what Pango is good at (like transparent handling
of right-to-left text blocks).

Also, Pango does not let you choose the TTF font file used for rendering
directly; instead it relies on fontconfig to find a suitable system font. In
Widelands, we would prefer to feed in our own font files, so we can be sure
the text looks the same no matter which OS is used.

Pango is stable and used in a lot of products like Gnome. It is also what Wesnoth uses for most of
its rendering needs. But for the (cool) Wesnoth help, it is in fact not used,
likely because of the reflow problem I mentioned earlier.

Documentation for Pango is available in the form of Doxygen output. This is
a nice reference, but hardly more useful than the source code itself. There is
also no easy tutorial that goes beyond the PangoLayout example, which can only
render into rectangles. In particular, there is no example of how to add custom
markup tags or how to do lower-level layouting. The best I found is an old mailing
list post that explains the steps needed to lay out text oneself. I was
able to stitch those hints together into a binary that does just that. But it is
really awkward, and it does not offer a lot of the functionality that the
PangoLayout wrapper offers. I feel this is a dead end for now.

Harfbuzz

Harfbuzz is a slightly lower-level API that is in fact used by Pango. It
allows loading TTF files directly and offers more control over layouting. For
rendering it also uses Cairo. The problem here is that there is no stable version:
according to the homepage, the API might change until 1.0; the current
version is 0.6. And there is no documentation. Nada!

I got around to doing the final episode of my little screencast series on
UltiSnips. This time I talk about Python interpolation and give an example
of a snippet that really makes my life easier.

I always wanted to have a few screencasts showing off the main features of
UltiSnips, but nobody was willing to take the time and effort to produce them. Now
that I've done it myself, I can understand why: it takes a lot of time. From
start to finish - planning, recording, cutting, uploading and promoting - one
screencast of roughly ten minutes takes approximately six to eight hours. It
was a lot of fun doing the casts though, and I learned a lot. I am also
reasonably pleased with the final results.

If someone suggests a really good topic for a fifth screencast, I will consider
making one. But I feel that UltiSnips' main features are well covered now.
Thanks for sticking with me throughout the series!

I just released UltiSnips 2.0, together with a short screencast to run you
through the new features.

2.0 is a killer-feature release for me: Having normal mode editing and $VISUAL
support adds a lot of value to UltiSnips for me.

UltiSnips now has all the features I initially expected from a snippet plugin
when I started writing it. It was not an easy road getting here, but I am
proud of what we have achieved. I consider it mostly feature complete now; I
will not object to good ideas for new features that enhance the user experience,
but we will try our best to avoid feature creep. UltiSnips
wants to be only one thing: the ultimate solution for snippets in Vim.

Last time I talked about the beautiful linear algebra interpretation of
least squares, which adds a wonderful geometric view to the
analytical footwork we had to do before. This time I will borrow from a
video I saw on Khan Academy a long time ago. It connects some of the
elementary principles of statistics with regression.

The basic idea is similar to the analytical approach. But this time, we will
only try to fit a line to $$M$$ $$x$$/$$y$$ pairs. So, we
have a model of the form

\begin{equation*}y(x) = m x + c.\end{equation*}

The sum of squared errors between model and data is then

\begin{align*}S := \sum_i \epsilon_i^2 &= \sum_i (m x_i + c - y_i)^2 \\
&= \sum_i m^2 x_i^2 + 2 c m x_i + c^2 - 2 m x_i y_i - 2 c y_i + y_i^2\end{align*}

Of course, we search for the global minimum of this expression with respect to
the parameters $$m$$ and $$c$$. So let's find the derivatives:

\begin{align*}0 \stackrel{!}{=} \frac{\partial S}{\partial m} &= \sum_i 2 m x_i^2 + 2 c x_i - 2 x_i y_i \\
0 \stackrel{!}{=} \frac{\partial S}{\partial c} &= \sum_i 2 m x_i + 2 c - 2 y_i\end{align*}

We can conveniently drop the 2 that appears in front of all terms.
We will now rewrite these equations by using the definition of the sample mean:

\begin{align*}0 \stackrel{!}{=} m M \overline{x^2} + M c \overline{x} - M \overline{x y} \\
0 \stackrel{!}{=} m M \overline{x} + M c - M \overline{y}\end{align*}

Let's drop the $$M$$s and solve for $$m$$ and $$c$$.

\begin{align*}m &= \frac{\overline{x}\,\overline{y}- \overline{x y}}{\overline{x}^2 - \overline{x^2}} \\
c &= \overline{y} - m \overline{x}\end{align*}

If you look closely at the term for the slope $$m$$ you see that this is
actually just the covariance of $$x$$ and $$y$$ divided by the variance of $$x$$.
I find this very intriguing.
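The closed-form result is easy to check with a few lines of code. Here is a minimal pure-Python sketch (the function name fit_line is mine) that computes $$m$$ and $$c$$ from the sample means exactly as derived above:

```python
def fit_line(xs, ys):
    # Sample means: of x, of y, of the products x*y and of the squares x*x.
    M = len(xs)
    mx = sum(xs) / M
    my = sum(ys) / M
    mxy = sum(x * y for x, y in zip(xs, ys)) / M
    mxx = sum(x * x for x in xs) / M
    # Slope and intercept from the derivation:
    # m = (mean(x)*mean(y) - mean(xy)) / (mean(x)^2 - mean(x^2)), c = mean(y) - m*mean(x)
    m = (mx * my - mxy) / (mx * mx - mxx)
    c = my - m * mx
    return m, c

# Points lying exactly on y = 2x + 1 are recovered exactly.
m, c = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

For noisy data, the same call returns the least squares line; as noted above, the slope is just the covariance of $$x$$ and $$y$$ divided by the variance of $$x$$.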

The second screencast is done and ready! If you have not yet seen the first,
look over here. This one was done under the maxim "done is
better than perfect". I had more advanced explanations planned, but the
screencast turned out to be longer than expected. I therefore opted for a
complete overview of the most common features of UltiSnips and will explain
some of the cooler stuff in this blog post.

So here is the video. You can find some snippets with their explanation below.

Elements inside of default text

Placeholders can contain other UltiSnips elements. This is totally cool and
very useful. For example, the following snippet is defined for the html file
type and ships with UltiSnips:

snippet a
<a href="$1"${2: class="${3:link}"}>
$0
</a>
endsnippet

It lets you either delete the complete tabstop 2 with one keystroke, or
jump to tabstop 3 and replace the CSS class name.

Another example:

snippet date
${2:${1:`date +%d`}.`date +%m`}.`date +%Y`
endsnippet

This snippet will enter the current date in the format that is used in
Germany: dd.mm.yyyy. Jumping forward will allow you to either overwrite the
day or the day and the month. I use this in my todo files to enter due
dates.

Complete Scripts in Shell Interpolation

Shell interpolation is as powerful as the shell itself. Here is a silly
example that is basically a complete Perl script:

snippet random
`#!/usr/bin/perl
use LWP::Simple qw(get);
my $page = get "http://www.random.org/integers/?num=1&min=1&max=100&col=1&base=10&format=plain&rnd=new";
print $page`
endsnippet

This snippet will insert a truly random number between 1 and 100 fetched from
random.org. Useful? Maybe not, but a good example.

Transformations are more powerful than you might think

This transformation shows the transformation option g which means
global replace. It will uppercase every word in the first tabstop while
mirroring its content.

snippet title "Titelize in the Transformation"
${1:a text}${1/\w+\s*/\u$0/g}
endsnippet

In the replacement part of this transformation, \u means "uppercase the
next character" and $0 matches everything found. The complete
transformation basically says: find a word (\w+) and any amount of whitespace
(\s*) and replace them with what you found, but with one character uppercased.
The g at the end makes sure that not only the first but all words are uppercased.

Conditional replacements

One feature that I didn't mention in the screencast is conditional
replacements. When you capture a group in the search part of a transformation,
you can check whether it was matched in the replacement like so: (?<number>:yes text:no text).
yes text will be inserted if the group was matched, otherwise no text will
be inserted. A quick example:
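The example snippet itself did not survive in this copy of the post. Based on the description that follows, it may have looked roughly like this (the trigger name and placeholder text are my guesses):

```
snippet ot "one or two"
${1:o or t}${1/^(o).*|^(t).*|.*/(?1:ne)(?2:wo)/}
endsnippet
```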

The transformation will match an o at the beginning into group 1 or a t at
the beginning into group 2. The rest of the search string just matches
everything in the first tabstop. The replacement will either be ne when an
o was matched, wo when a t was matched, or an empty string. That means
you can just type a t to quickly get two as the complete content of
the snippet, or an o to get one. If you type anything else, you only get the
content of the placeholder, i.e. verbatim what you typed.

Inword triggers

The snippet option i stands for inword trigger. Let's
say that for some coding project I often want variables ending in izer:
maperizer, vectorizer... I want to define a snippet ri which should
expand to rizer. Triggers are always matched against full words though - except
when you use the i option for your snippet:

snippet ri "rizer" i
rizer
endsnippet

This does what I want: maperi<tab> -> maperizer.

Regular Expression triggers

Regular expression triggers can be used with the r option of a snippet. Note
that you have to add delimiters around your trigger when using regular
expression triggers. I use the following snippet because I can never remember
the exact trigger I chose for my $$\textrm{\LaTeX{}}$$ chapter snippet:
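The snippet itself is missing from this copy of the post. A regular-expression trigger that expands on exactly the prefixes listed below could look like this (the LaTeX body is my assumption):

```
snippet "cha(p(t(e(r)?)?)?)?" "Chapter" r
\chapter{$1}
endsnippet
```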

This will expand on either cha, chap, chapt, chapte or chapter.
Regular expression triggers are even more powerful because you can access
the match object inside your Python interpolation blocks. More on that
in the next episode.

Regular expression triggers also allow defining triggers containing special
characters, whitespace, or multiple words.

snippet "this is a trigger" "Multiword & Whitespace" r
Hello World
endsnippet

Conclusion

The features we discussed so far are enough to build crazy snippets and will
suffice for most uses. If you ever feel lost or forget how to use one of the
features, take a look at the UltiSnips help document.

Next time we will discuss Python interpolation, which brings even more power to
snippets. With it, some of the features we discussed so far become easier
to read, and others only then become possible at all. With Python
interpolation there is basically nothing that a snippet can't do.

As always, leave comments and let me know what you think about the screencast
and the tips. Also let me know what you are interested in and what you would
like to see in future screencasts. I have no more topics for a fourth
screencast, so if you want one, let me know what you want to see in it.

Overlapping windows are a bad invention. Frankly, the one UI that works is
virtual desktops with non-overlapping windows (e.g. tiling window managers).
Granted, the world is slowly becoming sane, with iOS leading the way: every
application is full screen there.

On Mac OS X, the situation is dire: windows just open wherever they want to. A
while ago I wrote a Python script, quickly callable from Quicksilver
or LaunchBar, that took a simple syntax describing where you want your
window and moved it there. The problem: it was rather slow. It needed to run and
parse a command line tool to get the number of screens and their resolutions,
and it had to ask for the frontmost application via AppleScript.

I got fed up with that yesterday and made it fast. Getting the frontmost
application and the number of screens and their resolutions is now done via the
Carbon framework in a C extension. You can find the source code on GitHub;
download it from there and use pip to install:

You will want to call this from LaunchBar or Quicksilver, so install or make a
proper AppleScript (I ship one in contrib). The tool takes the following syntax:

move_window <screen_id><nr of x parts><x span><nr of y parts><y span>

Some examples

move_window 0      # Disp 0, full screen
move_window 02     # Disp 0, left half of screen
move_window 030    # Disp 0, left third of screen
move_window 031-2  # Disp 0, rightmost two thirds of screen
move_window 12121  # Disp 1, bottom right quarter of screen

I finally got the technicalities sorted out, found some inspiration and the time
to make a first screencast about UltiSnips. It is not so much about the specifics
of UltiSnips but rather an overview of what snippets are and a demo reel of some
of UltiSnips' features.

Let me know in the comments what you would like to see in future episodes!

Last time, we introduced the least squares problem and took a direct
approach to solving it. Basically, we just minimized the $$y$$-distance
of a model to the data by varying the parameters of the model. Today,
we find a completely different, but very neat interpretation.

Suppose we have the following problem: there is a set of linear equations without a
solution, e.g. a matrix $$\mat{A}$$ and a vector $$\vec{b}$$ such
that no vector $$\vec{x}$$ exists that would solve the equation.

Bummer. What does that actually mean though? If we plug every possible value of
$$\vec{x}$$ into this equation, we get essentially all linear combinations
of the column vectors of $$\mat{A}$$. That is a span of vectors; that is a
subspace. So $$\vec{b}$$ is not in our subspace, otherwise we would find
an $$\vec{x}$$ that solves our linear equation.

Here is a visualization in 3D when A is a two dimensional matrix:

The column space of the matrix is a plane, and $$\vec{b}$$ is not in that plane. A natural question
now is: which vector in the column space of $$\mat{A}$$ is the best approximation to $$\vec{b}$$?
But what is a good approximation? Well, how about the Euclidean distance between the endpoints of the vectors?
If we choose this criterion, there is a simple solution: the projection of $$\vec{b}$$ onto the column space of $$\mat{A}$$
gives the best approximation. Here is another image:

We have decomposed $$\vec{b}$$ into its best approximation in the column space of
$$\mat{A}$$, which we call $$\vec{b}_\parallel$$, and a component that
is orthogonal to it called $$\vec{b}_\perp$$. At the end of the day, we
are interested in the best approximate solution of our original equation, so we
are not really interested in finding $$\vec{b}_\parallel$$ itself, but rather an
$$\vec{x}^*$$ so that

Let's look for an equation for this $$\vec{x}^*$$. We will start by
using the fact that the two components of $$\vec{b}$$ are perpendicular to each
other, and we will also express $$\vec{b}_\perp$$ through vector addition.

We did the exact same thing as in the last post, but this time we took another
approach and quite a different view of the problem. The least squares
solution came up very naturally here, with a very simple geometric
interpretation (at least in the 3D case; the approach is valid in other
dimensions as well).
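The end result of this derivation is the well-known normal equations $$\mat{A}^T \mat{A} \vec{x}^* = \mat{A}^T \vec{b}$$. Here is a small pure-Python sketch (the helper name is mine) that solves them for a tall 3×2 matrix, matching the plane picture above:

```python
def lstsq_2col(A, b):
    # Normal equations: (A^T A) x* = A^T b for a tall matrix A with two columns.
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(2)]
           for i in range(2)]
    atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(2)]
    # Solve the resulting 2x2 system with Cramer's rule.
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x0 = (atb[0] * ata[1][1] - ata[0][1] * atb[1]) / det
    x1 = (ata[0][0] * atb[1] - atb[0] * ata[1][0]) / det
    return [x0, x1]

# The columns of A span the x/y-plane; b sticks out in the z direction.
A = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
b = [1.0, 2.0, 3.0]
x_star = lstsq_2col(A, b)
```

Here A·x* = (1, 2, 0) is exactly the projection b∥ onto the plane, while the unreachable z component is the orthogonal rest b⊥.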

The last iteration of this series will take a look at least squares in statistical
models.

Here we go again. After a long time of travelling and illness, something new on
this blog. Today I start a new mini series that looks at least squares
in various ways. I want to show how least squares arises naturally in various
areas. Of course it will all look a little similar every time, because it
is the same principle throughout; but the basic approach depends on the context.

The direct method

Suppose we have some kind of measurement process that takes a measurement every
second, and we know that the system we measure follows a model with three
parameters, that is, the values we expect are

\begin{equation*}m(t) = p_1 t^2 + p_2 t + p_3.\end{equation*}

But obviously, our measurement process is noisy in some way. Our aim is now to
find good values for $$p_1, p_2$$ and $$p_3$$. But what is good? Well,
why not start by minimizing the distance between our model's predictions and the
measured values?

In this plot, we see our measurements as blue dots and an awful model
fit as a red line. We also see various $$\epsilon$$ bars. Those are the
errors we want to minimize: the model prediction at a point $$t_i$$
minus the value $$v_i$$ that we measured at this point:

It seems like a bit of a stretch to write this simple equation in a matrix
notation there in the last step. But it will benefit us in a moment.

We do not want to minimize the plain sum of these errors, because positive and
negative values might cancel out. The absolute value is not ideal either, because
it makes the error function non-smooth. So let's instead take the sum of squares
of the individual errors:

Minimization is easiest when we have a quadratic error function, because then
there is only a single global minimum, which can be found by setting the
gradient to zero.

The notation of this minimization becomes a bit tedious; that's why we go back
to the matrix notation we used earlier and note that we can stack all our
measurements into one matrix. Working with vectors will make the
derivatives easier:

Finding the gradient of S with respect to the parameter vector $$\vec{p}$$ is now easy. We also immediately
set this equal to the zero vector to find the minimum of this function:

Using this with the above example we get the following best fit
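For the concrete three-parameter model above, the recipe can be sketched in a few lines of pure Python (the function name polyfit2 and the test data are mine): build the stacked matrix, form the normal equations from the zero-gradient condition, and solve.

```python
def polyfit2(ts, vs):
    # Stacked design matrix: one row [t^2, t, 1] per measurement.
    A = [[t * t, t, 1.0] for t in ts]
    # Normal equations (A^T A) p = A^T v from setting the gradient of S to zero.
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(3)]
           for i in range(3)]
    atv = [sum(A[k][i] * vs[k] for k in range(len(A))) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    M = [row[:] + [rhs] for row, rhs in zip(ata, atv)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    p = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        p[i] = (M[i][3] - sum(M[i][j] * p[j] for j in range(i + 1, 3))) / M[i][i]
    return p

# Noise-free samples of m(t) = 2t^2 + 3t + 1 are recovered exactly.
p = polyfit2([0, 1, 2, 3, 4], [1, 6, 15, 28, 45])
```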

Conclusion

I call this the direct approach because we directly set out to minimize the
quantity, and the least squares formula pops out. There are other goals one can
pursue that lead to the same formula, which I will show in some
of the next posts. Hopefully it will not take as long until I find some
time to post again.

Let's continue our discoveries in Chapter 7 of the Book by discussing
my solution to 7.P1.

We have talked about the de Casteljau algorithm before. It is an expensive
but numerically stable way to calculate points on a Bézier curve given
its control polygon. Just to recap, this is how it is calculated:

Given the points $$\vec{b}_0 ... \vec{b}_n \in \mathbb{R}^m$$, $$t \in \mathbb{R}$$. Set

Then $$\vec{b}_0^n(t)$$ is the point with parameter $$t$$ on the
corresponding Bézier curve.

Another common approach to interpolation is to use polynomials of degree
$$n$$. We then also need some $$t_i \in \mathbb{R}$$ that
correspond to the given $$\vec{b}_i$$; if they aren't given, we can just
set them so that they are equally distributed in $$[0,1]$$.

Aitken's algorithm is then defined by

Given the points $$\vec{b}_0 ... \vec{b}_n \in \mathbb{R}^m$$, with corresponding $$t_i \in \mathbb{R}$$
and the position of the interpolation $$t \in \mathbb{R}$$. Set

Then $$\vec{b}_0^n(t)$$ is the point with parameter $$t$$ on the
corresponding interpolating polynomial.

The geometric interpretation is the following: we start out with points 0 and 1
and interpolate linearly between them, then we take points 1 and 2, and so on. In the
next step, we linearly interpolate between the linear interpolations of the
last round by blending one into the other. The degree of the interpolation
polynomial increases by one in each step.

The difference between de Casteljau's and Aitken's algorithm in each step is
quite simple: the first maps from $$[0,1] \to \mathbb{R}^m$$
while the second maps from $$[t_i, t_{i+r}] \to \mathbb{R}^m$$.

We can provide a straightforward combination of the two algorithms by
mapping from the interval $$[ 0\alpha + (1-\alpha) t_i,\; 1 \alpha + (1-\alpha) t_{i+r} ]$$
with an $$\alpha \in \mathbb{R}_+$$; that is, we linearly interpolate
between the two intervals. Let's define a quick helper function:

Using this, the generalized algorithm looks like this

Given the points $$\vec{b}_0 ... \vec{b}_n \in \mathbb{R}^m$$, with corresponding $$t_i \in \mathbb{R}$$,
a blending factor $$\alpha \in \mathbb{R}$$ and the position of the
interpolation $$t \in \mathbb{R}$$. Set

Then $$\vec{b}_0^n(t)$$ is the point with parameter $$t$$ on the
corresponding blend between Bézier and polynomial interpolation.

The de Casteljau algorithm is the special case $$\alpha = 1$$; Aitken's algorithm
is the special case $$\alpha = 0$$. Here is a plot of a planar interpolation:

See how the curve for $$\alpha = 0$$ passes through all 4 points; this is
a property of Lagrange (polynomial) interpolation. The curve for
$$\alpha = 1$$ is the only one that stays inside the convex hull of the points;
this is a property of Bézier curves.
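The whole scheme fits in a few lines of Python. The following sketch (all names are mine) implements the generalized step: in round $$r$$, point $$i$$ is interpolated over the blended interval $$[(1-\alpha) t_i,\; \alpha + (1-\alpha) t_{i+r}]$$:

```python
def blended_interp(points, ts, alpha, t):
    # Generalized de Casteljau / Aitken recursion.
    b = [list(p) for p in points]
    n = len(points) - 1
    for r in range(1, n + 1):
        nxt = []
        for i in range(n - r + 1):
            # Blend the intervals [0, 1] (Bezier) and [t_i, t_{i+r}] (Aitken).
            lo = (1 - alpha) * ts[i]
            hi = alpha + (1 - alpha) * ts[i + r]
            u = (t - lo) / (hi - lo)  # local parameter in [0, 1]
            nxt.append([(1 - u) * a + u * c for a, c in zip(b[i], b[i + 1])])
        b = nxt
    return b[0]

pts = [[0.0, 0.0], [1.0, 2.0], [3.0, 3.0], [4.0, 0.0]]
knots = [0.0, 1 / 3, 2 / 3, 1.0]
# alpha = 1 reproduces de Casteljau, alpha = 0 reproduces Aitken's algorithm.
```

With alpha = 0 the curve passes through every data point at its knot, as Lagrange interpolation must; with alpha = 1 it evaluates the Bézier curve of the control polygon.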

As always, you can find the complete code for this post in the GitHub
repository under chap07. Also, Farin has written a paper on this very topic.
You can find it here.

I just added a calendar widget on each page where it makes sense here. I am
not sure how useful it is, so I will try it out for a while and remove it
again if it makes no sense.

I also improved the blog archive list in the right navigation bar. I was
inspired by something I saw on Guido's blog, namely a folding tree view. I
am aware that most blogging tools like Wordpress have this feature built in,
but Guido's page made me realize for the first time how useful it is. So I copied
it. It turned out that you only need a little bit of JavaScript and CSS. I
basically followed the short but concise tutorial by David Herron. I only
needed a few adjustments, and they were trivial thanks to the beauty that
is jQuery.

Continuing my reading of Curves and Surfaces for CAGD
by Gerald Farin, I will talk today about a topic that comes up
in chapter 6: elevating and reducing the degree of a Bézier curve.

Elevating the degree

First, we start out by proving that the degree of a Bézier curve can always be
elevated without the curve being changed.

A curve $$\vec{x}(t)$$ with the control polygon
$$B = {\vec{b}_0, \ldots, \vec{b}_n}$$ can be represented as a Bézier curve
using the Bernstein Polynomials $$B_i^n$$ as basis:

We will now prove that we can increase the degree from $$n$$ to $$n+1$$
and still keep the same curve, given we choose the new control polygon
accordingly.

We start out by rewriting $$\vec{x}(t)$$ as

\begin{eqnarray*}\vec{x}(t) &=& (1-t) \vec{x}(t) + t \vec{x}(t) \\
&=& \sum_{i=0}^n (1-t) \vec{b}_i B_i^n (t) + \sum_{i=0}^n t \vec{b}_i B_i^n (t)\end{eqnarray*}

And now we show that both terms can be rewritten as sums from 0 to $$n+1$$. Let's
start with the first term:

Now, we plug this into the summation formula. We can readjust the summation
variables cleverly by noting that we only add zero-valued terms, and we get the formula:

Now all that is left to do is compare the coefficients of the old control polygon
points with those of the new ones, and we have our solution: the new points are
linear interpolations of the old ones. We can find the new points $$\vec{B}^{(1)}$$ from the old
$$\vec{B}$$ by a simple matrix multiplication:

The hard work is already done: it only remains to say that we indeed find
the best approximation to our Bézier curve (in the least squares sense) by
finding the least squares solution to the inverse problem of the matrix
equation above. Therefore, we find the best degree-reduced control
polygon by

And here is an example from inside Blender. The highlighted objects are the
reduced control polygon (n=4) and its Bézier curve. The non-highlighted ones are
the originals of one degree higher (5).
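The elevation step derived above is tiny in code: each new point is a linear interpolation $$\vec{b}'_i = \frac{i}{n+1} \vec{b}_{i-1} + (1 - \frac{i}{n+1}) \vec{b}_i$$ of two old ones. A small Python sketch (the function name is mine):

```python
def elevate_degree(points):
    # Degree elevation of a Bezier control polygon from degree n to n + 1.
    # Endpoints are kept; interior points are linear interpolations of old ones.
    n = len(points) - 1
    new = [list(points[0])]
    for i in range(1, n + 1):
        w = i / (n + 1)
        new.append([w * a + (1 - w) * c for a, c in zip(points[i - 1], points[i])])
    new.append(list(points[-1]))
    return new

# Elevating a quadratic (3 control points) yields 4 control points
# that describe the exact same curve.
elevated = elevate_degree([[0.0, 0.0], [3.0, 3.0], [6.0, 0.0]])
```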

Generators are iterators over things made up at runtime. For example, if you
ask me for the Fibonacci sequence, I can generate an infinite amount of its
numbers by only remembering the two previous ones:

Generators are quite useful in programming. Python has them, and Haskell is
basically built around them (I think they call them infinite lists).
Generating the Fibonacci sequence up to a maximum number maxn looks like
this in Python:
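The original listing seems to have gotten lost here, but a generator along the lines described - yield numbers while remembering only the previous two - might look like this:

```python
def fib(maxn):
    # Yield Fibonacci numbers up to (and including) maxn.
    a, b = 0, 1
    while a <= maxn:
        yield a
        a, b = b, a + b

print(list(fib(10)))  # [0, 1, 1, 2, 3, 5, 8]
```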

After my last blog post about C++0x features, I was intrigued to write
something similar in C++. The corresponding main function also looks very nice
with the new container iterate feature:

So basically, we have a container here called FibonacciGenerator, and we
iterate over it with the new container-iterate feature. For this to work,
we have to implement the begin() and end() functions in the container,
and the values those functions return must implement operator++, operator*
and operator!=. That goes something like this:

The implementation is quite straightforward: all the (not very heavy) lifting
is done in the generator. The inequality comparison checks whether the current
number is bigger than that of the other iterator. When this becomes true, the
loop in main terminates. The dereference operator returns the current number,
and the increment operator calculates the next one. The container itself only
contains the begin and end functions which are called by the container-iterate
syntactic sugar.

Note that generators are nothing new. You can write them in exactly the same
way in C++03. But somehow, the new Pythonic container iterate feature
triggered me for the first time to think about them in C++.