Contextual addition
I took the absence of complaint as silent consent and set forth to implement HyperWhatever in associative subscripts. To do so I set up a little bit of tooling to lower the cognitive load. When writing code I like to hit F1 in Vim and have it do the right thing depending on context. Since Vim is not self-aware yet, we have to tell it how to help us. To specify the context we can add a line to a source file that defines a pseudo filetype.
use v6.*;
use Test;
# tests for the changes to Rakudo go here
# vim: ft=rakudotest
We can then define a filetype-based mapping in ~/.vimrc.
autocmd FileType rakudotest nmap <F1> :w<CR>:!./testcase %<CR>
Since I change Rakudo I need to run a local copy, made by forking on GitHub and cloning into a fresh directory. There I can place a small shell script.
#! /bin/sh
# rebuild only if a source file is newer than the installed binary
test 0 -lt $(find src/ -newer install/bin/raku -iname '*.pm6' | wc -l) \
    && make clean all test install
install/bin/raku $1
With this little chain I can edit Rakudo files and then hit F1 in the file that holds the tests. Rakudo will be rebuilt and the test file executed with the local Rakudo instance. The latter needs to be prepared with perl ./Configure.pl --gen-moar --gen-nqp.
Since I wanted to put the new operators into v6.e, I had to hunt down the spots where Rakudo needs to be told what goes where. The new operator candidate goes into its own file in src/core.e/, which then has to be made known in tools/templates/6.e/core_sources. There are sanity tests in t/02-rakudo/03-corekeys.t to keep features from bleeding from one version into another. This provided me with some bafflement, as institutional knowledge tends to do. I was mostly guided by error messages, which shows how important it is to avoid LTA error messages.
A few days ago we had a discussion about the one-argument rule, where I claimed to have few difficulties with it. While implementing %your-hash{**}:deepkv I had to change my mind. The reason the rule doesn't bite me in practical code is actually rooted in good testing.
my $seen-J;
my $seen-E;
for %hash{**}:deepkv -> @deepkey, $value {
    $seen-J++ if @deepkey ~~ <J> && $value == 7;
    $seen-E++ if @deepkey ~~ <A D E> && $value == 3;
}
is $seen-J, 1, 'seen leaf in {**}:deepkv';
is $seen-E, 1, 'seen deep leaf in {**}:deepkv';
Here the destructuring of the return value of %hash{**}:deepkv only works when the operator returns exactly the right thing.
multi sub postcircumfix:<{ }>( \SELF, HyperWhatever, :$deepkv!, *%other ) is raw {
    # walk nested hashes, collecting the key path on the way down
    sub recurse(\v, @keys) {
        if v ~~ Associative {
            for v.kv -> \k, \v {
                recurse v, [@keys.Slip, slip k]
            }
        } else {
            # leaf: hand key path and value to the gather below
            take slip(@keys, v)
        }
    }
    gather for SELF.kv -> \k, \v {
        if v ~~ Associative {
            recurse(v, [k])
        } else {
            take slip([k], v)
        }
    };
}
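The traversal above can be sketched in Python — my analogy, not part of Rakudo — where a generator plays the role of gather/take and a tuple stands in for the key path:

```python
def deepkv(mapping, path=()):
    """Yield (key-path, value) for every leaf of a nested mapping."""
    for key, value in mapping.items():
        if isinstance(value, dict):
            # descend, extending the key path by the current key
            yield from deepkv(value, path + (key,))
        else:
            # leaf: emit the full path together with the value
            yield path + (key,), value

print(list(deepkv({"J": 7, "A": {"D": {"E": 3}}})))
# → [(('J',), 7), (('A', 'D', 'E'), 3)]
```

The Python version sidesteps the Slip juggling because yield hands back one (path, value) pair at a time, while the Raku gather has to flatten the pair so the pointy block can destructure it.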
Getting the right amount of Slip into the right places took half an hour (building Rakudo takes about 60s) and plenty of cursing. I hope this is a case of torturing the implementer on behalf of the user. With test-driven code that seems to happen automatically. I'm too lazy to write a fancy test, so I force the testee to be clever enough to satisfy it.
Since I couldn't find any documentation on how to add features in specific language versions, I hope this will be helpful to those who seek to do the same.
The next fast thing
A few commits ago lizmat taught next to take an argument. I started to play with this and found that not all loops are created equal.
sub prefix:<♥>(&c) {
    LEAVE say (now - ENTER now) ~ 's'; # Don't you ♥ Raku?
    c
}

♥ { say sum gather for ^1_000_000 { .take if .is-prime; } } # 1.399831818s
♥ { say sum eager for ^1_000_000 { .&next if .is-prime; } } # 1.131352526s
♥ { say sum do for ^1_000_000 { .&next if .is-prime; } }    # 1.60557427s
♥ { say sum (^1_000_000).grep: *.is-prime; }                # 0.778440528s
I'm surprised that the do for form is the slowest. The fact that gather is slower than eager raises the question of whether a for loop is the better way to create a lazy list.
my \a = lazy gather for ^100_000_000 { .take if .is-prime; }
my \b = lazy for ^100_000_000 { .&next if .is-prime; }
♥ { say sum a[^1_000_000]; } # 25.91494395s
♥ { say sum b[^1_000_000]; } # 26.521749639s
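For comparison — again a Python analogy under my own assumptions, not Raku — a generator behaves like the lazy gather above: nothing is computed until a slice is demanded.

```python
from itertools import islice

def primes_below(n):
    """Lazily yield primes below n (naive trial division, for illustration)."""
    for candidate in range(2, n):
        if all(candidate % p for p in range(2, int(candidate ** 0.5) + 1)):
            yield candidate

lazy = primes_below(100_000_000)   # nothing computed yet
print(list(islice(lazy, 5)))       # first slice forces evaluation
# → [2, 3, 5, 7, 11]
```

Both the Raku lazy lists and the Python generator pay per element only when indexed, which is why the two Raku timings above are so close: the loop form matters less than the per-element work.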
Optimising is an art of doing the same thing over and over again. There seems to be room for optimising things that do things over and over again.