
Thursday, November 21, 2013

Is Google Dart Dead?

Update: it's actually now an ECMA Standard, after the SDK 1.0 release
This year more than ever, the Chrome Dev Summit event has been full of freaking interesting talks ... except one ... you can guess which one ...
Despite the sensationalism of the chosen title, and just to be clear, these are my personal thoughts on the language (or rather, on the way it keeps being introduced), so the trophy for the least interesting talk today goes to the ... drumroll ... Dart one.
Please note that in this post I am not judging the speakers' ability to give a talk, nor their amazing programming skills; I am judging the content and the message sent to all developers like me.

Update: on day 2 there was a Dart interview that finally tells us more about what Dart is and how it can make web development better. Now this is what I'd like to know/see/hear and be thankful for, way better than how the language was re-introduced on day one. Back to the post, which was about day 1.

Where Are The Real Achievements?

I honestly wasn't expecting to hear, once again, all the JavaScript problems we have already learned and know. At release 1.0 of the Dart language, I was expecting a talk about all the amazing things that Dart can do these days, or has improved 'till now ... instead, the whole talk was something like:
Dart does not have undefined as JavaScript does ... how cool is that?
Dart has classes, with a scriptish syntax, much better than JavaScript ... how cool is that?
Dart has no coercion (aka easy casting through explicit + or ==) ... how cool is that? (ed. is it?)
Dart has operator overloading ... how cool is that? (ed. so Dart has an extra problem JavaScript never had ...)
So now that everyone got how shitty JavaScript looks, let's move on to how Dart works ...

Dart Desugars To JavaScript

So please cut all this crap and start respecting the language that is able to show off all your features, because I'm really fed up with all these battles against JavaScript, when once again this language, coming from 14 years ago, has demonstrated, with all its undefined "problems" that, thank gosh, no real developer has had since 2004, to be very flexible for the modern Web ... do you get it?
It does not matter how cool your new language is if everything it does can be done just as well with JavaScript on all the platforms that do not have a native Dart VM ... so just Respect The JavaScript and get over it: Dart lost before even trying!

Even Google Seems To Know It

The last talk of the day, probably the one with the smallest audience, being late in the afternoon (but there were still many developers in that room), was about bringing native performance to the browser through Google Native Client.
Bear in mind, we are talking about native code here, not yet another VM that won't be able to do the impressive work that Emscripten did, and that asm.js is doing as well, both based on JavaScript as the compilation target (so go home Dart, you are late to the party!)
The worst part of this post is that I still don't know anything about Dart and how great it could be for my daily job, because all I have been told is that JavaScript, the language I know best, has coercion and an apparently infamous undefined value I can forget about through the undefined == null coercion (problem solved, Dart).
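As a minimal sketch of that coercion trick (plain JavaScript, common knowledge rather than anything Dart related):
// `==` treats null and undefined as equal to each other and to nothing else,
// so a single loose check covers both "missing value" cases
function isMissing(value) {
  return value == null; // true for undefined too
}
isMissing(undefined); // true
isMissing(null);      // true
isMissing(0);         // false
isMissing('');        // false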

Be Polite And Respect JS Or Leave The House

I know I've historically not been so keen on Dart but, after waiting a year to see progress, waiting for the final 1.0 release, and hearing what I've heard today, from people that worked on V8 too, I really got the feeling people are still wondering what the hell this is all about.
When you go to Ruby, Python, even PHP conferences, you hear about how great each of these programming languages is.
You don't need to tell everyone that JavaScript in IE8 is crap if you are attending a Scala conference, you know what I mean?
Even more disturbing, you cannot logically blame the language that makes your language work cross platform ... it's like blaming Assembly for making programming possible ... are you out of your mind?
I think that if this is all I can learn about Dart V1.0, especially with ES6 and the new class syntax around the corner, and already possible today through transpilers, I would rather hear that it's gone ... they tried, but Google Native Client was a better possibility (and I have thoughts about that too ... a talk full of "we were in a plugin era" that talks about yet another browser plugin, even if integrated, as Flash has always been in Chrome).
These are fun times, Google, and I think you have been doing great stuff since forever ... but Dart ... show us its real muscles, or stop wasting ultra skilled developers on something nobody wants and nobody needs, thank you!

Tuesday, November 19, 2013

The Untold Awful Truth Behind Retina

I put in place a very simple test page made out of circles:
  • the first one is an svg, without touching its ratio
  • the second one is a canvas with anti-aliasing, if any, and with the same svg size multiplied by its ratio
  • the third one, an image out of the previous canvas
  • the fourth one is just the same canvas without anti-aliasing, if any
At the end, the little one in the left corner shows the ratio in use the moment you see the page, out of my display script logic.
Basically discarding the second and third circles, here is the untold story about retina.


Above is the difference between SVG and the same image at a 2X ratio on a Nexus 5.

2X Images Aren't Enough

The fact that Apple keeps telling developers "just use 2X images and you're gonna be OK" does not mean it's OK. The iPad mini retina, together with the iPod retina, and I expect all the others, are actually trapped behind this mythological 2X.
That's correct: the borders of both the svg and the aliased canvas circles look the same on the iPad retina. What's the issue here? That at 326 pixels per inch the quality cannot be the same 2X as at 264 ppi, or just 220 in the case of the MBP, so basically the display is capable of more, but the software (or trapped hardware?) won't show you any difference.
How can I tell? If you check the circle borders, you can tell it too!

Nexus 5 3X Isn't Enough

If you check the borders of both the svg and the aliased canvas circles again, but on your Nexus 5, you'll notice that while the SVG is very sharp and perfectly aliased, the canvas 3X version is not.
In order to obtain a similar result you might want to scale up the ratio a little bit, let's say to 4X ... you see? With this latest link the canvas is almost as sharp as the SVG. Still not a perfect ratio at 445 pixels per inch.
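For context, here is a minimal sketch of the kind of canvas scaling the test plays with (names and sizes are illustrative, not the actual test code):
// render a canvas at a chosen ratio: the backing store gets ratio times
// more pixels than the CSS size, then the drawing context is scaled back
function createCircleCanvas(cssSize, ratio) {
  var canvas = document.createElement('canvas');
  canvas.width = canvas.height = cssSize * ratio;            // physical pixels
  canvas.style.width = canvas.style.height = cssSize + 'px'; // CSS pixels
  var context = canvas.getContext('2d');
  context.scale(ratio, ratio);
  context.beginPath();
  context.arc(cssSize / 2, cssSize / 2, cssSize / 2 - 1, 0, Math.PI * 2);
  context.stroke();
  return canvas;
}
// devicePixelRatio reports 3 on a Nexus 5: try passing 4 to see the difference
document.body.appendChild(createCircleCanvas(200, window.devicePixelRatio));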

Those Bad Boys

So the honest Blackberry 10, with its 2.2437500953674316 (!) ratio, actually looks sharper once forced to 2X, with fewer artifacts visible on such a good display.
Android 2.X on a Samsung Galaxy Y, with its 0.75 ratio, looks smooth and antialiased enough.
The least considered device, at least in the US, aka Windows Phone, which has had a ratio of 1.5 since version 7 devices, looks very sharp! You can compare with the blurry circle perimeter by forcing the ratio to 1X ... that's how every logo and/or graphic looks these days on Windows Phone: crap, because most likely you, your framework, or your CDN is serving 1X images, thinking that's enough for that screen. I tell you there is already a huge difference, you should try it!

This Is MA MA MA MA MA MA MA MA Mad Mad Madness

The last project I am working on, in some spare time over the weekend, ended up with some code like this:
function createShortcutIcon(iconCreator) {
  for(var
    link1, link2,
    // used to generate icons at runtime
    canvas = document.createElement('canvas'),
    // where to place links
    fragment = document.createDocumentFragment(),
    sizes = [
      30, 57, 60, 72, 114, 128, 144, 173, 196
      // completely random, hopefully future-proof, entries
      , 214, 256
      // btw, the whole sizes and 2X idea is so wrong ...
    ],
    i = 0; i < sizes.length; i++
  ) {
    link1 = document.createElement('link');
    link2 = document.createElement('link');
    link1.rel = 'shortcut icon';
    link2.rel = 'apple-touch-icon';
    link1.type = link2.type = 'image/png';
    link1.href = link2.href = iconCreator(
      canvas, sizes[i], '#E6A72A', '#286868'
    ).toDataURL();
    link1.setAttribute('sizes', link1.sizes = sizes[i] + 'x' + sizes[i]);
    link2.setAttribute('sizes', link1.sizes);
    fragment.appendChild(link1);
    fragment.appendChild(link2);
  }
  (document.head ||
   document.querySelector('head')
  ).appendChild(fragment);
}
The nonsense above creates a sharp icon, for Home Screen saving, on every bloody device I have to deal with, since it does not make sense to talk pixels when we cannot know dynamically/upfront how many we need.
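To give an idea of the expected callback, here is a hypothetical iconCreator that paints a two-color disc at the requested size (purely illustrative, not the project's actual drawing logic):
// reuses the same canvas per size: fills the background,
// draws a centered disc, and hands the canvas back for .toDataURL()
createShortcutIcon(function (canvas, size, bgColor, fgColor) {
  canvas.width = canvas.height = size;
  var context = canvas.getContext('2d');
  context.fillStyle = bgColor;
  context.fillRect(0, 0, size, size);
  context.fillStyle = fgColor;
  context.beginPath();
  context.arc(size / 2, size / 2, size / 3, 0, Math.PI * 2);
  context.fill();
  return canvas;
});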

The FirefoxOS Joke

There is a manifest validator, which is nice, that tells you that the icon is supposed to be 60x60, or 90x90, but maybe 120x120 is good too, and if you really want to cover everything, provide 256x256, 128x128, 64x64, 32x32 and 16x16 ... are you mo$#^*@&^#&*^fu&#&*!^%$#&@ng kidding me?

An Epic Fail, IMHO

This part of moving the web forward, including inline CSS data URIs that unfortunately cannot scale well across all sorts of device pixel ratios, unless you inline SVG directly, took a very weird direction, where nobody knows anymore what to do to make an image, or an icon, look good across all platforms.
Raster Images Are Dead, IMHO, and we should provide at most 3 sizes for them, as it already is for streamed video: from very-high quality, to medium, to a low one for those that have to pay for bandwidth. But holy crap, there's no way even Akamai will ever provide the right ratio for every device, so why on bloody earth should we write stuff like this?
<img srcset="
  320.jpg 0.89x 400w, 480.jpg 1.33x 400w, 640.jpg 1.78x 400w,
  480.jpg 1.04x 520w, 640.jpg 1.39x 520w, 960.jpg 2.09x 520w,
  640.jpg 1.1x 639w, 960.jpg 1.66x 639w, 1280.jpg 2.2x 639w,
  320.jpg 0.89x 800w, 480.jpg 1.33x 800w, 640.jpg 1.78x 800w,
  480.jpg 1.09x 959w, 640.jpg 1.45x 959w, 960.jpg 2.18x 959w,
  320.jpg 0.89x 1200w, 480.jpg 1.33x 1200w, 640.jpg 1.78x 1200w,
  480.jpg 1.09x 1440w, 640.jpg 1.45x 1440w, 960.jpg 2.18x 1440w,
  480.jpg 0.86x 1920w, 640.jpg 1.14x 1920w, 960.jpg 1.71x 1920w, 1280.jpg 2.29x 1920w,
  640.jpg 0.86x, 960.jpg 1.29x, 1280.jpg 1.71x, 1920.jpg 2.57x
">
This is not me speculating: Mr. Tab Atkins Jr wrote a post about this, and he is one of those guys that are trying to make good standards for everybody's future ... do you see the problem I am talking about now?

Solution

I wish I had a silver bullet for this; unfortunately nobody can tell hardware and screen makers that they should stick with a few predefined resolutions: they are competing with each other to put as many pixels as they can in a smaller area to make it sharper ... and while somebody complained our eyes cannot even spot the difference, I have created the circle test, which shows to my eyes that circles are not so sharp and crisp yet. In any case I don't think the web will ever go far using raster images, no matter how optimized they are, since they do not scale.
SVG, or make All The Things vectorial, is my current mantra ... one that does not require a new FirefoxOS manifest for each new device with a better screen in the near future.

Wednesday, November 13, 2013

A Not So Geek Person On A Nexus 5

The first day it was announced, I ordered a Nexus 5, due to its value-for-money ratio, probably one of the best out there, its Android KitKat support and, on top of everything, its Google Chrome first approach when it comes to mobile browsing. As you know, I have been doing mobile web development for quite a while, so that's something I should test and take care of: thank gosh the random WebKit stock browser era is ending!
Anyway ...

This Is Not About Me

Another person got this phone for another reason: to improve on a forgotten Sony Ericsson Xperia Arc S on Android 4.0.4, a phone so stupid it cannot even upgrade its own preinstalled apps, thanks to its inability to use the external SD card as storage for preinstalled software such as Google Maps, Google Play Store, Google Whatever. This is not about me, then, not about somebody that knew what they were going to buy, going to appreciate it as an upgrade, going to use it at its best ... this is a post on behalf of the person that received this Nexus 5, a person I'd rather not even mention because it does not matter: she is just a user, and not a geek one!

First Contact

Now, the very first reaction was something like:
OMG it's huge!
Well, that's actually a valid point. Phones are getting too big, imho. Phones should be phones; this trend where phones get closer and closer to mini tablets ... I honestly never got it. I want to be able to put my phone in my pockets, any pockets, as I was very happy doing back when phones like this were the best option ever.

Back to that first day: I insisted she give it a chance, 'cause it's going to work, and it's going to be better, faster, easier, stronger ... and the whole Daft Punk repertoire.
And so she did ...

A Few More Days

A few concerns came up, such as "where are my contacts?", or simply "where are my SMS?", or stuff like "why does my battery last 1/3 as long as on the old phone?", and reasonable questions like these.
I mean, I have an answer for all these questions, but why should a normal user be so disoriented? Battery aside, how can the old Android SMS icon disappear like that, so that nobody would click it and re-learn that Hangouts is now the new SMS manager? And still ...
"I don't want Google to know all about my SMS".
Especially in these days, where the NSA story is under everybody's nose, and where common users keep getting more scared and more aware about security every day, and since I am lucky enough to just know all this stuff and somehow still manage to sleep at night, remembering to never write my plans to conquer the world via web or mobile phone data, I didn't even feel the need to blame her or reply. I mean ... right? A single piece of software knows everything I write and receive? That's scary! How can I even start explaining that this is basically what has happened since the beginning of the smartphone era, and that even before then it was basically possible/the same?
How to even start a conversation such as: "Hey, that's just an aggregator app. You still send SMS when it's about SMS, and use Web or MMS when it's about those things, don't panic!"
Indeed, I don't, and I keep observing the situation ... until today ...

The Straw That Broke The Camel's Back

I am just quoting here:
I start thinking I want my old phone back.
I have enough of Google in the middle of everything I do.
I had to quickly save a mobile phone number; this phone asked me so many things that I threw it on the table and grabbed pen and paper to write the number down!
Moreover, I don't want to re-login to a gmail account for everything I do with my life!
I don't know what you think about this, but this is actually another valid point I take for granted. I know Google's effort to simplify users' lives is great, but there are also limits, where asking, and making something that was once an easy step more complicated, because of "features" somebody may not even desire, isn't exactly UX progress.
This happened to me too, many times, where technology that is supposed to make my life easier got so selfish that it made it impossible for me to get a result and, as a result, I had to drop such technology.
An alarm that forgets to wake you up is pointless, you know what I mean ...
We have already accepted the regression where your alarm will never wake you up unless your phone is on, something that wasn't true for the "before smart phones" generation of devices ... but right now ...
A phone, which historically has an easy-to-access contact list, should never be in the middle asking which email or gmail account to use to store a contact; it should eventually do that after the operation, batched somehow, to be non-blocking for the user, but still clever the next time.

Last On G+

To have access to Apple, Facebook, LinkedIn, or any other social network, you don't need a Gmail account. To have access to Google Plus, you do need a Gmail account. I mean, that's actually fair from Google's perspective, but not so cheap or easy for people that have used the same email since forever and are not planning to embrace Gmail any time soon.
That's how you could reach everyone on this planet: letting them use the everyday tools and stuff they are used to, while still offering great services, as Google Plus is.
That was actually the latest, most disappointing complaint I've heard about this "all you can Google" phone that did not match the daily life of somebody outside "that circle".

As A Summary

I still believe the Nexus 5 is the best Android phone I've personally ever seen and used, but I have quite a background with Android, since version 1.5, and I believe we, in IT, should never forget that some people don't like to change phone every year, that some people don't like to change anything at all, and that even the more adventure-prone people might want to keep simple tasks simple: no charger with them all the time, no need to buy clothes with bigger pockets, and the ability to just save a bloody contact in a few steps. Now that's progress, right?

P.S. I am still quite happy with my Lumia 620; I don't think there's a phone out there that could make me happier right now, but this is another story, one I am planning to tell you soon.
Stay tuned ;-)

Sunday, November 03, 2013

dumb.min

This is a needed explanation of a tweet of mine that has been mostly and instantly misunderstood (just read through all the comments):
this keeps bugging me: can we all please stop putting .min suffix when it comes to save bytes and use .max instead when size doesn't matter?
I already posted about this a while ago, but I haven't seen much feedback or change since, so here I am again, trying to explain this matter only.
As a spoiler, please don't take it personally or get offended by the abused word dumb; I am the first one that used, and probably still uses somewhere, the .min suffix, alright?

Logically Speaking

So here's the thing: we all use automation in order to create minified versions of our files, and if we don't, we should. There are tons of easy-to-use tools out there, repositories such as gitstrap that work with a Makefile, or grunt, which can watch all files and keep building for us ... pick an option, create your own, do whatever, and at the end create the build folder.
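As a minimal sketch of the naming this post argues for, assuming grunt with the grunt-contrib-uglify and grunt-contrib-copy plugins (the file names are illustrative): the minified build gets the plain name, while the readable source carries the explicit suffix.
module.exports = function (grunt) {
  grunt.initConfig({
    uglify: {
      dist: {
        // the minified output, served by default: no .min suffix needed
        files: { 'build/app.js': ['src/app.js'] }
      }
    },
    copy: {
      dist: {
        // the readable version, for the developers that want to debug
        files: [{ src: 'src/app.js', dest: 'build/app.src.js' }]
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-copy');
  grunt.registerTask('default', ['uglify', 'copy']);
};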

The dumb.min.js Output

Since we use automation, there's no reason on earth to serve our users 4 extra bytes for a script, a css file, anything minified that is there to be served as fast as possible.
I mean ... the purpose of minification is to serve the minimum possible amount of bytes per request, right? Where even headers and file names matter, for every single network request, right?
Then .min is a name for developers that want to feel cool because they are serving the minified version, throwing 4 extra bytes at every user, in every page, and for everything that uses such a suffix convention ... who is this supposed to be convenient for?
The user? No! The bandwidth? No! The semantics? ...

dumb semantic

Now open any website that uses a CDN to load a common library. Let's choose jQuery, one of the most popular, right?
You go to the download page and there is no link to any CDN that does not end with .min ... now ask yourself: who would be so dumb as to ever serve the full, uncompressed, non gzipped/deflated version of any CDN library? Developers, to debug? Good, they are 1% of the traffic of that website ... so is this how we do things? Penalizing the default use case in favor of dumb laziness?
What do you expect from a CDN in the first place, if not the best available and most performant option to serve the file?

... and mootools-yui-compressed.js ...

When I read this page I thought "this must be a joke".
Seriously, check that whole Google CDN list. Actually, whatever does not contain .min is really not minified at all! There are developers out there serving Prototype 1.7 with comments and, moreover, the gold medal for the best anti-pattern naming convention for a minified file goes to that mootools one. How about we also put in there the date, the list of all the authors, and the license names of both YUI and MooTools?</sarcasm>
140byt.es offers entire scripts in a tweet's size, and we think naming something with 15 extra pointless-for-the-purpose chars such as -yui-compressed is good for anyone out there?
Bear in mind this is not a real/concrete issue with using the MooTools library itself, as explained later on, but a matter of logic that, once applied everywhere, makes things look weird (aka: "something went terribly wrong at some point on the internet").
In this latter case, even the automation could be improved to use -yc or something else, if really needed (as if the tool used meant a different library behavior ... hopefully not, otherwise drop that tool!)

... plus min.map.js ...

If it was about having a convention able to tell the browser where to look for the file, I would re-think this ... but as you can see in the Microsoft CDN, all files need to have an equivalently named one with the map suffix, so once again: who's gonna benefit from this dumb convention somebody created one day, and nobody ever realized wasn't that clever?

Quoting @craigpatik
...min.js, ...min.css, ...min.svg — one of those pieces has useful meaning; the other is redundant.

Not CDNs Fault

Absolutely not, it's entirely the developers' fault, because their automation most likely creates those files with those names.
Still using jQuery as an example, though many libraries are like this, the task to create distribution files looks like:
distpaths = [
  "dist/jquery.js",
  "dist/jquery.min.map",
  "dist/jquery.min.js"
]
How utterly simple would it be to have any of these variants instead?
distpaths = [
  "dist/jquery.max.js",
  "dist/jquery.map",
  "dist/jquery.js"
]

// or ...
distpaths = [
  "dist/jquery.source.js",
  "dist/jquery.map",
  "dist/jquery.js"
]

// or ...
distpaths = [
  "dist/jquery.src.js",
  "dist/jquery.map",
  "dist/jquery.js"
]

// or ...
distpaths = [
  "dist/jquery.yo-mama-can-build-scripts-too.js",
  "dist/jquery.map",
  "dist/jquery.js"
]

What Will We Gain Doing So

So, it's not that suddenly anything will go twice as fast, or that any user will ever notice; it's just about pure semantics, logic, sense, and that feeling of "doing it right" that might show newcomers on the web that we are not so silly that, in order to serve fewer bytes, we automatically put 4 extra on top just because ... no: we discussed it, we thought about it, and we chose the best available option, without going too far into extreme nonsense optimization, as "dropping the whole name" would be, simply applying common sense and logic to the chosen pattern.

Otherwise, it's like thinking that to distribute an executable we need to use program.compiled.exe as the name ... we don't, we know that's already the compiled version of the executable. The same goes for served files: in 2013 they should be optimized by default, with the source one eventually available for developers' purposes.

Once again, not a big deal. We can all sleep at night keeping things this way, but the next time you set up a build script, maybe you'll drop that dumb repeated .min and do things logically in every single layer of your stack.

a bit of Math

Here I quickly found some jQuery CDN related stats, and I am pretty sure the amount of websites using this library is more than "just 16000", but if we take that number and multiply it by 4, we have 64000 pointless extra bytes for jQuery alone. Now, assuming every site has some extra plugin or script, and some extra CSS, let's say a minimum of 4 total files, we'll have a minimum of 256000 pointless bytes on the wire that nobody needs, wants, or uses. If we then add the extra space a name takes in any filesystem, and the extra 4 bytes every single CDN request url will transport ... etc etc ... you see?

As a conclusion, and even if 4 bytes aren't an issue in any single case, can we all agree that if the whole internet uses .min in every (probably minified too) HTML file to refer to external resources, there is an absurdly redundant amount of pointless data whose aim is concretely unknown to everybody?
Thanks for your time and your possible, future, collaboration.

Wednesday, October 30, 2013

My Dreamed Developer Board

It looks like every single developer board out there lacks something, and still not a single one, not the Raspberry Pi, not the Beaglebone Black, literally none of them, has great, good, or even decent GPU support. Update: here is a complementary post from an Open Source drivers developer.

The Open Source World VS any GPU

It's unbelievable how many Open Source hardware and CPU producers there are out there, with ARM in first place, and yet these same producers never let Linux fellas develop for them ... free men's/women's work they don't want, because keeping their little messy drivers and schematics secret, or protected via some NDA, is the rude reality we have faced since basically forever. Only Intel seems to have open source drivers, though not open source schematics, and I am looking forward to their new embedded effort starting with Galileo on Arduino; still, that will be a huge step forward for the OS community, and still a wall to break for the community itself.

A Bold Approach via Kickstarter

Sir Francis Bruno and his team went out with a Complete Verilog implementation of a 2D/3D graphics processor capable of OpenGL and D3D w/ full test suite idea, based on some hardware they've been working on for a while, and on 20+ years of experience with GPUs and graphics cards.
Their mission and idea have been described in this post from Brian Benchoff too, and I found rather hilarious the amount of free complaining their goal received.
So here's the deal: with $200,000 USD they'll finalize, clean up, and put into the Open Source world an embeddable 2D graphics card, which is what most Linux users would love to have in order to fix all the flickering, bad reverse engineering, missing updates, or slow rendering in their desktop environments ... but people complain it's too much ...
For $400,000 they'll release full OpenGL and Direct 3D support ... look, I don't even know where to start building such a thing, so I am the last one entitled to judge this goal, but regardless, it's not like we ever had an option, right? Open Source does not mean these guys will put themselves down for a few bucks, without eating, testing, or thinking about their families, just for the cause, right? So once again, no idea how people could complain about this.
For $600,000 USD they promise performance improvements, something that would come regardless once everything is Open Source and available to the community, right? Last, but not least, with $1,000,000 USD they'll go further, releasing a universal shader version of the whole refactored package:
This is our ultimate stretch goal and requires a complete redesign. It's something we have been wanting to do for years, but didn't have the resources. This would allow us to create a complete open source implementation of a modern day graphics accelerator. If we receive more than the above, it will allow us to devote more time and effort to the project and we'll be able to release code sooner.

I Have Backed Them!

And kudos, because if nobody has ever done this until now, it is probably because:
  • it is actually not simple at all; I'd love to read all the complaints about the stretch goal, but also to see any of the complainers doing this for less!
  • it does not pay back, once everyone else can improve on it without royalties
  • it requires a huge amount of time (they say Q2 2015; I think that's optimistic)
  • it's like putting themselves under all the Open Source world's spotlights, where, if the goal is reached, they cannot fail, or they probably won't find a job in this field anymore
  • more reasons I don't have the competence to even judge
A last thought on that would be: isn't Kickstarter excellent exactly for these kinds of goals, where going it alone is not possible due to the amount of resources and time required?
I'd rather back more projects that make a real difference, instead of yet another key-ring with an LED light, or we won't make huge progress in the Open Source, and its embedded, scenario ... too penalized, in my opinion, by this, and only this, HUGE gap that nobody is trying to close (or, even worse, gets judged for trying).
I really hope they'll reach the ultimate goal: a real mini computer, with graphics and games included, with Linux on top, and fully open source hardware acceleration: still a myth at the end of 2013, when only Android and its binaries are somehow working ... not Linux: no drivers, and no interest in providing them.

Which Board Do I Dream Of

Well, these are the features I'd buy without even thinking, for a reasonable price:
  • a Dual Core (or more) CPU with a decent amount of cache and at least a 1 Ghz clock, as it is for the A20 or greater
  • 2GB (or more) of DDR, at least @ 400Mhz, as it is for the Utilite
  • 2GB (or more) of Nand, as it is for the pcDuino
  • a GPU with OpenGL ES 3.0 and OpenCL, with 64MB (or more) of RAM
  • an SD card slot compatible with at least a 32GB SD class 10 (or better), as on the BBB or any other
  • an HDMI output (with audio on it), as on many current boards
  • a small form factor, as the MarsBoard, BBB, or RPi
  • an Ethernet port, as on many of these
  • Wi-Fi, as the Arduino Yun already has (Bluetooth would be a plus)
These are, in my opinion, the bare essentials to have a competitive Desktop/Server embedded system capable of much more than just watching TV or switching some LED on/off, but irony wants that not a single board out there has all these features combined.
Today, such a board would cost around $70, as the first Beaglebone did at the beginning, and it was a success!
Intel, I am looking at you, hoping those GPU drivers will be Open Source too ;-)

Sunday, October 06, 2013

Did Arduino Yún Win Them All ?

While I am still excited about Tessel's release date, I received this Arduino Yun beauty a couple of days ago and I must say it is a pretty damn good little board!

Zero Experience With Arduino? No Problems

This is the very first Arduino board for me, and I've always been a bit skeptical about the potential of these boards VS the Raspberry Pi, Cubieboard, BeagleBone Black, and friends.
"Let's be honest, it's cool to switch some lightbulb on and off", the little voice inside my head kept repeating (OMG, I hear voices!!!!), "but how much more could I do with a fully powered Linux board", was the answer from the voice at analogOutput(PIN_2, 13) (jeeeeez, I'm done here!).
Anyway, the point of this paragraph is how easy it has been to start from scratch and get complete software up and running via WiFi ... let's talk about that later on ...

Not Just Sketches!

400Mhz isn't bad at all for most basic common tasks, and neither is 64MB of RAM. So the raw power is already higher than Tessel's; plus, the microSD card slot is integrated, and as soon as you put your super fast class 10 (or higher) card in there, a new world of possibilities becomes instantly available, even if not directly integrated with the OS.

Connecting Arduino To A SQLite3 Database

You read that correctly, and probably everyone else out there has already done this. I have to admit I didn't even google for it: it has simply been, conceptually, the very first problem to solve.
Interoperating with the Linux side behind the Arduino capabilities is one thing; keeping track of, updating, and exchanging data about anything between these two systems, and independently, brings them to a new level of possibilities.

A Tale Of SQLite3 Wrapping Experience

One of my latest projects, which targets every OS but is tested mainly on embedded Linux boards, is called dblite, and it conceptually does, with a higher level of possibilities, what this sketch does too: connect to an sqlite3 database through the sqlite3 CLI and do everything that would be possible to do directly in there.
The difference between this Arduino Yun approach and dblite is that here the equivalent of a nodejs spawn is performed for every single SQL statement, rather than once per application lifecycle.
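To make that difference concrete, here is a minimal node.js sketch of the two approaches (illustrative only, not dblite's actual code):
var spawn = require('child_process').spawn;

// per-statement spawn, as the Yun sketch does: one process per query
function queryOnce(file, sql, onData) {
  spawn('sqlite3', [file, sql]).stdout.on('data', onData);
}

// single long-lived process, as dblite does: statements piped via stdin
var shell = spawn('sqlite3', ['data.db']);
shell.stdout.on('data', function (chunk) {
  console.log(chunk.toString());
});
function query(sql) {
  shell.stdin.write(sql + ';\n');
}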

20 Minutes Plus Some Remote Testing

Yeah, the little IDE that comes with Arduino is full of examples too, and most of them are specific to the Yun.
This has been the easiest, fastest learning curve ever for something that was so hard to do a while ago!
Massive kudos to the team, including all my fellow countrymen working on it; too bad this Yun is the first board not fully made in Italy though :D

What's Not So Easy Yet

The latest version of Linino can be built with a MIPS compatible gcc, so it is possible to build some software directly on the board, even if that will probably take forever for bigger apps.
It is also quite painful to rebuild the whole OS instead of having some incremental update possibility directly via the package manager, and all you can do right now is cross compile, hoping that after 5 hours you won't get an error.
I am using an Atom based Ubuntu x64 netbook for this task, so maybe that's why it takes forever; however, it has never taken that long to prepare a Linux distribution for other embedded boards ... I hope things will become easier to install, build on the board, or update, thanks to the constant effort of the OpenWrt community (or maybe ... some link that points me to how to do these things in a way that works? :P)

Have fun with Arduino!

Friday, October 04, 2013

Web Development Has Never Been So Beautiful !

In an era where iOS7 takes many steps backward while Chrome adopts a full Home Screen App like solution, it's kinda normal to be confused about the current status of Web Development.
I won't hold it any longer: Web Development is actually in the best shape ever, a situation rarely seen in the past and probably not so nice in the near future, when massive JavaScript changes will land in some evergreen browser.

The DOM Is Awesome !

Developers still learn and code as if IE6 were the only browser to take care of ... well, we are lucky enough that today the situation is not that anymore, so that IE8 is the oldest browser we might want to deal with, if we really care about those people stuck behind such a jurassic browser.

DOM Level 4 For Every Browser

Not kidding here, the latest DOM Level 4 API is absolutely nice:
  • events and custom events with bubbling work as expected
  • patterns like handleEvent, a universal, fast and cross browser solution to callback hell and hundreds of pointless binds in your code, are available for any sort of browser or event (see the sketch below)
  • dispatching events has never been so easy, with a possible extra detail to carry through the bubbling process via el.dispatchEvent(new CustomEvent('type', {bubbles:true, detail:object}))
I could talk about every single new feature that makes DOM L4 awesome, and the good news is that everything has been tested in every browser and it works, including the infamous Internet Explorer 8, thanks to this repo!
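As a minimal sketch of the handleEvent pattern mentioned above (standard W3C behavior; the controller object is illustrative):
// addEventListener accepts any object with a handleEvent method:
// one object can listen to many events with zero .bind() calls
var controller = {
  clicks: 0,
  handleEvent: function (e) {
    // `this` is the controller object, not the node
    if (e.type === 'click') {
      this.clicks++;
    }
  }
};
document.addEventListener('click', controller, false);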

ES5 Is Everywhere

Probably not fully implemented as the specs say, ES5 is in any case in every browser we can think of these days.
It was already almost fully implemented in webOS, it works like a charm in IE9, it's in every evergreen browser, plus it was adopted early in almost every Mobile Browser, from iOS 5.1, the latest update available for old iPads and iPhones, through Android 2.3 and others. Seriously, even the Nokia ASHA Xpress Browser scores well when it comes to ES5 support!

ES5 Can Fix The DOM Too

Thanks to its wide adoption, and thanks to the fact that every ES5 capable browser knows DOM Element prototypes too, including IE8 and its DOM-only Object.defineProperty() functionality, the DOM can be fixed in almost all its missing parts.
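A minimal sketch of what that makes possible, assuming an IE8-like browser where Object.defineProperty works on DOM objects only (textContent is just an illustrative gap; real shims are more careful):
// patch a missing DOM accessor directly on the Element prototype
if (!('textContent' in document.documentElement) &&
    Object.defineProperty) {
  Object.defineProperty(Element.prototype, 'textContent', {
    get: function () {
      return this.innerText;
    },
    set: function (value) {
      this.innerText = value;
    }
  });
}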

This is how it was possible, as an example, to implement classList and DOMTokenList behavior in IE8 as well as in iOS 5 or the Nokia ASHA browser, where the early implementation does not accept multiple arguments and does not respect the standard for the toggle method.

Not Only The DOM

The current status of the web is that everything can be reasonably fixed with the available tools. This is the best thing that has happened so far, and it all goes back to the decision made in 2008 to abandon specifications that were breaking the web due to a different syntax.
At that time, the decision was to extend the ES specs, or fix them with what was possible/available then.
This is paying off now, after 5 years, when almost everything can be normalized through amazing libraries such as es5-shim, including its sham companion, which fixes other aspects of the ES5 specs in a reasonable way.

Everything Is Working Now

It sounds overall optimistic, but green is what you should see once you click this link, followed by this one.
About 140 Tests Of Awesomeness, which I am going to explain after introducing yet another repository ...

The Dreamerplate

Similar to a boilerplate, the dreamerplate is a repo that includes all the needed basics to have a homogeneous environment across all the modern and jurassic browsers you can imagine surfing the internet these days.
From IE8 to anything that came after, you can try the base testing page in the console, dreaming about an environment where you can:
// use de-facto standards to define methods
document.body.on('click', function (e) {
  this.append(' ' + e.timeStamp);
});

// take for granted that all you need is there
window.on(
  'unload',
  console.log.bind(console, 'bye bye')
);

// use widely adopted and future proof standards
// instead of needing this or that utility
window.dispatchEvent(
  new CustomEvent('unload')
);

// or simply go the way you are used to
window.trigger('unload');

// or, if coming from node ...
window.emit('unload');

// use a better DOM approach avoiding
// memory leaks and the need of WeakMaps
// or the need to bind everything
[
  'mousedown',
  'mousemove',
  'mouseup'
].forEach(function (type) {
  document.documentElement.on(type, this);
}, {
  handleEvent: function (e) {
    this['on' + e.type](e);
  },
  onmousedown: function (e) {
    // this is the object, not the node
    this.dragging = true;
  },
  onmousemove: function (e) {
    if (this.dragging) {
      document.body.append([
        e.pageX, e.pageY
      ].join(', '));
      document.body.append(
        document.createElement('br')
      );
    }
  },
  onmouseup: function (e) {
    this.dragging = false;
  }
});

// use window timers as they are meant to be used
// with extra arguments capabilities, i.e.
setInterval(
  requestAnimationFrame, // * not normalized in dreamerplate yet
  1000 / 30, // Fixed 30 FPS
  function () {
    // the expensive callback to optimize
    var marginLeft = parseFloat(document.body.style.marginLeft) || 0;
    document.body.style.marginLeft = (marginLeft + 1) + 'px';
  }
);

// avoid callback hell with bind and lost references
var handler = {method: function (e) {
  alert(this === handler);
  e.currentTarget.off(e.type, this.boundTo(this.method));
  // boundTo creates a single bound reference per object:
  // never lose a bound function again, and find them all
  // through the object they belong to!
}};
document.body.on('click', handler.boundTo('method'));

Eddy.JS To The Rescue

Most of the event handling magic, fully compatible with node.js too, is an eddy.js enrichment (a word you can easily swap with addiction once you start using it ;-))
The latest eddy news is the compatibility with CustomEvent, which also speeds up the trigger logic, too bloated until version 0.4, plus the W3C behavior for the trigger returned value: always false when any involved listener called preventDefault() and the event was cancelable, which is the default when a non-native Event object is passed, or regardless when eddy is not invoked through DOM nodes but rather in node.js or plain JS scripts.
For the DOM version only, eddy now relies on dom4 and, if IE8 needs to be supported, on ie8 upfront.
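A minimal sketch of that returned value behavior, based on the description above (the event name is illustrative):
// a listener that prevents the default of a cancelable event
document.body.on('custom:save', function (e) {
  e.preventDefault();
});
// like dispatchEvent, trigger returns false because some
// involved listener prevented the default action
var notPrevented = document.body.trigger('custom:save');
// notPrevented === false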

Eddy Array Extras

One of the handiest parts of eddy is the ability to use on, off, emit, and trigger recursively within Arrays.
[
  document.body,
  [].slice.call(
    document.querySelectorAll(
      'very.complicated:selector'
    )
  )
].on('click', function (e) {
  alert(e.currentTarget);
});
Being recursive, an Array of objects or DOM elements, eventually containing sub-arrays, can perform a single addEventListener-like operation at once, promoting the recycling of functions whose aim is to provide the same functionality for different events or situations.

Got It, But What Does It Mean In Terms Of Bandwidth ?

The size of both eddy and dom4, the most common configuration you're gonna need to provide, is 2.5 KB minzipped. Everything else is browser and OS file dependent.

Enjoy these days, since those when ES6 is released will be much harder and more confusing than the current status, as it has been, and probably always will be, for the Web when new things arrive and nobody is fully ready ^_^

Thursday, September 19, 2013

iOS7 - The Return Of The Splash Screen

Spoiler: code and examples shown or used in this post are meant to work, for demo purposes, on iOS7 iPods or iPhones only. One day I might improve the demo to make it work the same in other browsers too.

If you haven't yet read this great post from @firt, entitled Safari on iOS 7 and HTML5: problems, changes and new APIs, you probably should before you can understand what this is about ... done? Great!

Some Clarification

Maximiliano used mobile twitter to show what happens when you visit a webpage in landscape mode, where no previous, well known and tested iOS trick would manage to get a better, full screen resolution.
However, the problems are the same in portrait mode: there's no way, if you use any overflow: auto; or touch in your page, to see the magic morph happening in the surrounding Safari Mobile UI.
On top of this, when you apply or pay for Apple Beta Testing or a Developer account, you are under an NDA/EULA, so you can hardly report problems even to Apple itself, since it's very easy to get in trouble even for saying "hey, I've tested stuff here and it didn't work" ... when the place is publicly reachable and you are exposing premature problems ... right?
... oh well, back to this post ...

ReIntroduction To Safari Mobile Full Screen

One example is usually better than a thousand words ... so here is the example, which will work as expected on iOS7 on an iPod or iPhone, 'cause as Maximiliano said in his post, there's no way to obtain a proper full screen on the iPad (but probably that isn't needed anyway).

It's very important that you visit the example page before I explain what this is about, so please do it and come back after, in case the source code isn't self-explanatory enough :)

Insert Coin, And Press Start!

Even if the Wikipedia definition is jurassic, since splash screens have been used in Flash websites since forever, and most of the time in a totally dynamic way, Apple gave us only one possibility to have a full screen on its devices: we need to teach the user how to reach the full screen experience!

Actually Not So Bad As It Sounds

Thanks to CSS position:fixed and a background that keeps moving regardless, within the OS itself, we can make the road to full screen playful and fun.
Somehow, reaching full screen on the phone through the browser could be used as a way of leveling up the user, when it comes to a game.
In my example you can rotate, change, do things with the device, until you reach the full screen possibility again. Why couldn't this be an option? It works in both portrait and landscape too, and we have an extra "bonus" for the playing user ^_^
As a summary: isn't a playful splash screen a way less boring introduction to an app or game than just a static image, as the native app guys are used to these days? ;)

The Trick In A Nutshell

In order to reach full screen in iOS7, the body of the document must have style.height = (screen.height + 1) + 'px';, where the height needs to be the width once in landscape ...
And that's pretty much it! Test This Page if you don't believe me, and you'll notice that just scrolling up, in both the portrait and landscape sections, you'll find yourself in full screen.
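Here is a minimal sketch of the trick, put together from the description above (not the demo's actual source):
// give the body one extra pixel beyond the screen size,
// swapping height for width when in landscape
function forceFullScreenHeight() {
  var portrait = Math.abs(window.orientation) != 90;
  document.body.style.height = (
    (portrait ? screen.height : screen.width) + 1
  ) + 'px';
}
window.addEventListener('orientationchange', forceFullScreenHeight, false);
forceFullScreenHeight();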

Enjoy yet another modern web era hack, because we haven't learned yet that W3C APIs are there for good reasons, and nothing bad would have ever happened by simply following them rigorously.


Pssss, hey ... in portrait, just screen.height will do, without the plus one ... however, the plus one puts the browser in a sort of "must go full screen" state so ... you've been warned ;-)

Is iOS7 Full Screen ?

If you'd like to go dirty and obtrusive with this approach, here is a very simple function that can help you instruct users whenever you detect the screen is not full anymore:
function iOS7isFullScreen() {
  // magic numbers: the innerHeight values observed in full screen
  // on an iOS7 iPhone/iPod, in landscape and portrait respectively
  return Math.abs(orientation) == 90 ?
    innerHeight == 320 :
    innerHeight == 529
  ;
}
Feel free to adjust the sizes if needed, or to use the check to decide when something should be displayed and when not, for example in a setInterval.
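For instance, a minimal polling sketch of that idea (the hint element is hypothetical):
// show an instruction layer whenever full screen is lost
setInterval(function () {
  var hint = document.getElementById('fullscreen-hint');
  hint.style.display = iOS7isFullScreen() ? 'none' : 'block';
}, 500);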

Friday, August 09, 2013

My First Experience On Developer Boards

I should probably have titled this post Marsboard VS Raspberry Pi VS Cubieboard VS pcDuino and so on, but that would actually be unfair, since these boards are very, very different from each other.
This post is about what I've learned from such delightful, and hostile at the same time, Software, running over such state of the art and unthinkable, at least until a few years ago, pieces of Hardware!

What Are You Looking For

This is the very first question you should ask yourself, before ending up like me: with all these boards, and completely different projects/ideas behind each of them!

Just For Fun

In this case the Raspberry Pi is probably your best choice.
The community is awesome, as is its support for any related gotchas!
I've stopped counting the amount of kickstarter projects related to the little Pi, and I won't link any of them here, to be fair to all the others: freaking cool ideas!

Not Fun Only

Here the Raspberry Pi is still one step ahead ... I mean, you will rarely find a dedicated server within a professional colocation for something like $50 per year, right?
And guess what, I've already developed a fully featured website based on polpetta for node.js, able to keep RAM consumption low, against the slightly heavier, but surely more complete, Express JS Framework.

Arduino Compatibility

Well, the name announces itself, and pcDuino is the best choice here, without rivals. The compatibility is great, the community not so small and, best of all, the Arduino related programming is quite up to date too.
Powerful Allwinner A10 bare bones here, for better performance than the Raspberry Pi, to be able to do something more too and ... talking about performance ...

Best Overall Features

Here the Cubieboard has them all, with a very proactive community building the next HW accelerated thing on it every day, a great variety of Operating Systems supporting it, a former Allwinner Engineer behind it (Tom Cubie), and the best performance and features of all these boards.
But again ... is this what you were looking for?

Tiny, Cutie, Powy, Cheapy ...

Well, that's the Marsboard: unfortunately the least supported board out there, a board whose makers realized only 6 days ago that maybe it was important to be recognized properly in linux-sunxi, instead of burning RAM modules for each developer that was using Cubieboard specs on top of this one ...
Its forum has an echo in it, with different developers complaining about apparently broken/non-working A20 boards (the one behind the scenes in the first picture), but that said ... the Marsboard, at least the A10 one, is most certainly the smallest one out there, with almost the same power as the Cubieboard!


It is even smaller than the Raspberry Pi, since the latter has a slightly bigger board with a completely asymmetric layout, due to video, network, and a huge SD card slot I have no idea who thought was cool to have that way (easy to unplug, easy to accidentally remove, completely outside design principles ^_^)
It's not shocking news that once you buy a proper box for the Pi, it will indeed be almost as big as a Cubieboard.


Specifications

Here is roughly the most important info you want to know if your aim, like mine, is to use these boards not as passive dumb Android zombies, but as active machines able to do thousands of extra things via proper Linux Operating Systems!

Raspberry Pi (model B)

  • CPU ARM v6 700 Mhz
  • RAM 512 MB @400Mhz
  • NO Extra Storage
  • SD/HC/XC Card
  • BCM2835 SoC

pcDuino

  • CPU ARM v7 1Ghz
  • RAM 1GB @408Mhz
  • 2GB Nand
  • Micro SD/HC/XC Card
  • A10 SoC

Marsboard

  • CPU ARM v7 1Ghz
  • RAM 1GB @360Mhz
  • 4GB Nand + SATA 2
  • Micro SD/HC/XC Card
  • A10 SoC

Cubieboard

  • CPU ARM v7 1Ghz
  • RAM 1GB @480Mhz
  • 4GB Nand + SATA 2
  • Micro SD/HC/XC Card
  • A10 SoC

Focus The One You Want

Actually just an excuse to link the latest picture of these boards, taken with a Lytro camera (I am an effing Geek, I know!), but all the info I wish I had known before is basically here ... let's see what I have missed?

One Operating System To Rule Them All

Yes, true story: if all these boards have a different purpose because of support, gadgets, cuteness, or power, Arch Linux ARM has been my best choice thanks to its performance, a minimalistic but functional environment you can build from the ground up as you need, without going down to the serial port (most of the time :D), and great support from a very active community, able to bootstrap into node.js in about 7 seconds.

About Allwinner 20

This platform is not ready yet ... like, not at all ... but!
There are many developers trying to make things work because, even if the boards look the same, and I have both the Cubieboard2 and the Marsboard with A20 (which never actually booted properly, even with its natively provided Android), Allwinner unhappily decided not to directly care much about Linux support, and the same goes for Mali and ARM, whose drivers are kinda locked behind this silly "available and compiled for Android only" policy I don't really get/understand at all ... oh well, never give up and keep waiting: soon these boards will give us extra dual core power! (but I tell you, it's not that a 1Ghz CPU is bad at all ... )

Tuesday, July 30, 2013

dblite: sqlite3 for nodejs made easy

OK, I know, the well known sqlite3 module is cool, and all the glory to it ... well, it didn't work in my case :(

The Why

node-gyp is great, but it's not as portable and does not scale as much as I'd like.
If you try to use sqlite3 via npm on Arch Linux ARM, as an example, it won't work, even if the native sqlite library is there and usable ... moreover ...
What really bothers me is that node-gyp does not update within the system as any other system package would.
You need to rebuild, recompile, re-do everything, even if you distributed a specific linux version that trusts the package manager for updates and does not want to bother users with build tasks.
This is quite common on embedded hardware and related Linux distros, so I asked myself:
why on earth can't I simply pacman -Syu once in a while and just have the latest version of sqlite3, the one the whole system is using and trusting anyhow, automagically built for me, together with any other update, including the node one?

The What

The repository is here!
So here's the thing: dblite is nothing more than a spawned process over the sqlite3 shell, with piped and handled io. Anything you could write directly in the sqlite3 shell will just work through this module, and everything that produces a result, such as SELECT or PRAGMA, will be parsed only once fully flushed, asynchronously, at light speed and without blowing up memory, in order to create an Array of rows, where these can either be transformed into objects or kept as Arrays of fields.
Here is the equivalent of the first sqlite3 usage example in dblite:
// node dblite.test.js
var dblite = require('dblite');
var db = dblite(':memory:');
var start = Date.now();

db.query('CREATE TABLE lorem (info TEXT)');
db.query('BEGIN');
for (var i = 0; i < 10; i++) {
  db.query(
    'INSERT INTO lorem VALUES (?)',
    ['Ipsum ' + i]
  );
}
db.query('COMMIT');
db.query(
  'SELECT rowid, info FROM lorem',
  // retrieved as
  ['id', 'info'],
  // once retrieved
  function (rows) {
    rows.forEach(eachRow);
  }
);

function eachRow(row, i, rows) {
  console.log(row.id + ": " + row.info);
  if ((i + 1) === rows.length) {
    start = Date.now() - start;
    console.log(start);
    db.close();
  }
}
The interesting note is that on my Macbook Pro the above code runs in about 4~5 milliseconds, against about 15~21 milliseconds using the sqlite3 module: 3X faster!

An Intuitive API ... Like, For Real!

I'd like to do a test now: I write down some code, and you think about what the code does. After that, I tell you what it does, and you'll realize it's hopefully and most likely what you thought ... deal?
db.query(
  'INSERT INTO table VALUES (?, ?)',
  [null, 'some text']
);
db.query(
  'INSERT INTO table VALUES (:id, :value)',
  {
    id: 123,
    value: "wat's up?"
  }
);
I believe you understood these are just inserts with automatically addressed and escaped values, am I correct?
Let's do something else!
db.query(
  'SELECT * FROM table WHERE id = ?',
  [123],
  function (rows) {
    console.log(rows.length);
    console.log(rows[0]);
  }
);
What do you say? A select with an id, which will produce an output like this?
1 // the rows length
['123', "wat's up?"] // the row itself
OK, OK, you got that ... how about this one then?
db.query(
  'SELECT * FROM table WHERE id = ?',
  [123],
  {
    id: Number,
    text: String
  },
  function (rows) {
    console.log(rows.length);
    console.log(rows[0]);
  }
);
Would you ever bet this is the result in the console?
1 // still the rows length
{id: 123, text: "wat's up?"} // the row
How about all together?
db.query(
  'SELECT * FROM table WHERE id = :id AND value = :value',
  {
    id: 123,
    value: "wat's up?"
  },
  {
    index: Number,
    value: String
  },
  function (rows) {
    console.log(rows.length);
    console.log(rows[0]);
  }
);
Yep, the validation will populate the resulting row as {index: 123, value: "wat's up?"}, since this is how properties can be remapped in query results: specifying object property names, adding validation to the result.
db.query(
  'INSERT INTO users VALUES (?, ?, ?)',
  [null, 'WebReflection', '1978-05-17']
);
// what can we do with that date as string?
db.query(
  'SELECT * FROM users WHERE name = ?',
  ['WebReflection'],
  {
    id: Number,
    name: String,
    bday: Date
  },
  function (rows) {
    rows[0];
    /*
    {
      id: 35,
      name: 'WebReflection',
      bday: [object Date]
    }
    */
  }
);
As a summary, here is how the query method works: a SQL statement, optional fields to escape for the query, optional fields to populate results as objects instead of arrays, and an optional validation per field, where the default is always String.
I believe this is straightforward enough, but if I am wrong, please tell me your idea of an intuitive API after playing a little bit with this query one, thanks :)

The Target

Raspberry Pi, Cubieboard, and other ARM based hardware are the main tested platforms: if it goes fast there, it goes fast everywhere.
As written and tested on the main github project page, it takes 0.178 seconds for 100 inserts on an SD Card on a Raspberry Pi, while it takes on average 30 milliseconds to fetch 200+ rows at once, and memory consumption is taken care of too.
I will properly test the sqlite3 module's performance against this one, but I believe there are many cases where this wrapper around a single spawned object could surprise in terms of performance, delegating all the horsepower to the native sqlite3 shell, without bindings around.

Enjoy!

Wednesday, July 24, 2013

IE8 Is More W3C Standard

I know I am probably late to this party, but I am re-exploring, here and there, Desktop web development and developers' approaches to common Desktop problems.
It's pointless to mention that IE8 is probably the biggest one, so here I am with some good news!

W3C DOM Level 2 Implemented In IE8

You read that properly, including custom bubbling events!
This is what this ambitious but overall slim project is about: trying to harmonize IE8, and IE8 only, so that we can focus more on the standard W3C DOM API without requiring any external library at all, unless necessary.

Tests To The Rescue

If you don't believe it, or you would like to try it in a real (not simulated) IE8 browser, here is the interactive link, which will work in all modern Desktop and Mobile browsers, plus IE8, thanks to the mentioned library.
The test is interactive in the sense that at some point a real action from the user is expected, such as a click, an input focus, or an input blur, so that all tests, synthetic and not, are properly run and behave as expected.

Readapting W3C Style

The old IE8 gotchas are still there, and the most disturbing one in this case is the following:
element.addEventListener(
  'x:event',
  function problem(e) {
    element.removeEventListener(
      'x:event', problem, false
    );
  },
  false
);
The above example will fail in IE8 because of the named function expression bug: JScript also declares the name in the outer scope, as a separate function object. In a few words, the problem referenced inside the listener won't be the function that was assigned as handler, but its duplicated declaration, so removeEventListener receives the wrong reference.
var really = function problem() {};
// only in IE < 9
alert(typeof problem === 'function' &&
  // note these are different
  problem !== really
);
// true
Unbelievable? Well, it's an old gotcha, already demystified, and there are 3 easy solutions:
// first solution
var solved;
element.addEventListener(
  'x:event',
  solved = function solved(e) {
    element.removeEventListener(
      'x:event', solved, false
    );
  },
  false
);

// second solution
element.addEventListener(
  'x:event',
  function(e) {
    element.removeEventListener(
      'x:event', arguments.callee, false
    );
  },
  false
);
Well, yes, arguments.callee is still a thing and a very important and useful one in the IE < 9 world: go for it if you are supporting this browser!
A third solution, which David reminded me of, is the following one, based on a function declaration only:
// third solution
function solved(e) {
  this.removeEventListener(
    e.type,
    solved,
    false
  );
}
element.addEventListener(
  'x:event',
  solved,
  false
);

eddy.js Is Available Too

That's correct: the eddy test page, which you might want to try with IE8 too, should be completely green, which means IE8 can use the eddy.js core library without problems: isn't this awesome?

Thursday, July 18, 2013

eddy.js - A Bold Approach

It's not the first time some project of mine ends up in JavaScript Weekly; however, this time something different happened, since eddy.js reached my top 5 projects list in my personal Github repo in about a week after my first tweet, and I've never even blogged about it. This. Is. Awesome!

eddy.js In A Nutshell

JavaScript is the easiest event driven programming language I know, and every library out there knows this too!
jQuery and node.js are just the most successful JavaScript APIs out there, and both are mostly based on event driven development, either through asynchronous APIs and the libraries behind them, or through DOM behavior, as jQuery surely is.
The eddy.js aim is to bring the ease of an event driven application everywhere, hopefully matching everybody's taste in both the DOM and the server side world too!

Fairy Tales ... Less!

Let's face reality: except for switching the light on or off, where .switch() is most likely the most semantic method name you could think of for such an operation, we are used to thinking of on as a prefix for any sort of event (onload, onresize, onmouseover, onsomethinghappened), and .on() has been the method to define an event handler in the most successful libraries since JavaScript time.
Moreover, switching the light on or off is really a 1% edge case in web/JS programming, since the usual vocabulary is disabled or enabled or, electricity speaking, connected.
OK, OK ... if you are programming some developer board based on relays to switch things on or off, these words could have a bigger meaning for you, but seriously: in JS, and since about forever, this is the way we have understood events and event driven behaviors!

eddy.js To The Rescue

In about 1KB minzipped, eddy.js transforms any object, and only if needed, into an EventTarget or, in the node.js world, an EventEmitter, staying as CPU and memory safe as possible while providing enormous advantages for any kind of object!
var generic = {};
// OK, at this point I'd love to know if
// this object will ever fire a notification
// no matters what it does
// I just would like to register ... but how?
// wait a second, Hell Yeah, eddy.js!!!
generic.on('hell-yeah', notifyMe);
It's really that simple: the moment we need an event driven behavior, we are sure that any object inheriting from Object.prototype will expose those methods we all love, and that they will work as we expect! Isn't this much simpler than the following code?
// the current node.js way
function PretendingThisIsFine() {}
PretendingThisIsFine.prototype = Object.create(
  require('events').EventEmitter.prototype,
  {
    constructor: {
      value: PretendingThisIsFine
    }
  }
);

// later on
var generic = {};
var boring = new PretendingThisIsFine();

// promoting generic now ... but wait ...
// what if it was already defined?
// this will fail like hell since all
// behaviors related to the object are lost!
generic.__proto__ = PretendingThisIsFine.prototype;
// now generic is not the object we know anymore
// this is not a solution!
So here is the good news: trusting a simple 1KB library that could be served from any CDN, the above scenario could simply become this:
// the specific EventEmitter class?
// function YAGNI() {}

// later on
var generic = {};

// whenever we need it ...
generic.on(event, behavior);

Slightly Obtrusive But Totally Pragmatic !

Back in the day, libraries such as Prototype, with its Function#bind, defined new needs for JavaScript; today we are in a different time, where Object.prototype can be extended with non enumerable properties so that every for/in loop will be safe. How great is that?
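Just to make the point concrete, here is a minimal sketch of the technique, not the actual eddy.js source:
// a method defined this way never shows up in for/in loops
Object.defineProperty(Object.prototype, 'on', {
  enumerable: false, // the default in descriptors, stated for clarity
  configurable: true,
  writable: true,
  value: function (type, handler) {
    // a real implementation would register the handler somewhere
    return this;
  }
});

var o = {};
for (var key in o) {
  // 'on' never shows up here
  console.log(key);
}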

... And What About IE8 ?

Yes, as reluctant as I am about this topic, it's supported!
However, right now only for the JScript part; hopefully pretty soon the whole package, DOM included!
Anyway, if you are here worrying about IE8, Object.prototype pollution, and for/in problems, I've got some good news for you:
  • if you are still supporting IE8, you are already using obj.hasOwnProperty(key) in every loop (as shown below), so you are safe
  • we are in 2013: IE8 survived from 2009 until now, but if today some library starts polluting Object.prototype in a non enumerable way, that's just OK, since your IE8 code should already be aware of the for/in problem, shouldn't it?
  • if you think it's too late to spend time refactoring for IE8 ... well, you need to do that pretty soon regardless, so this might be your very best occasion to adopt a simplified approach to event driven development: go for it!
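This is the classic guarded loop I am talking about, safe in IE8 no matter what a library defines on Object.prototype:
for (var key in obj) {
  if (obj.hasOwnProperty(key)) {
    // only own properties end up here, regardless
    // of anything attached to Object.prototype
  }
}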

Still Some IE8 DOM Gotcha

I am trying to create a way to obtain, in way fewer lines than jQuery needs, a unified behavior for IE < 9 so that synthetic events can propagate basically the same way DOM events do there; for any other IE8 based library, eddy.js should be just fine!

Already Compatible

If you add eddy.js after libraries that rely on a weak .on() feature detection, eddy.js will simply sit there serving all the other objects that did not match the library criteria; eddy.js has been tested as compatible with jQuery and other libraries.
Give it a try, please, and let me know what didn't make you happy, if anything, except some old school morality about extending Object.prototype ... I mean, there is no other way to obtain the same usefulness without enriching the prototype root, is there?
Or would you rather go down with the chaotic pattern shown before?

Tuesday, July 09, 2013

Some JS Descriptor Trick

I have been back home for 20 minutes after 10 days of vacation in Italy and I'm already bored, so what better way to spend the time than talking about some wizardish tricks with JS descriptors? (yes, they were showing the great Wizard of Oz again on the United Airlines flight from Frankfurt ... thanks for wondering ...)

The Recycling Trap

Descriptors can be recycled without problems and reused to define the same things here and there. However, there is an undesired side effect with recycled descriptors: they work with constant/static values, getters, or setters, but they are unable to change behavior during their lifetime.
var propertiesDescriptor = {
  shared: {
    value: Math.random()
  }
};
The above descriptor is just a basic example where Object.defineProperties({}, propertiesDescriptor) will always pollute the empty object with the same random value.
var a = Object.defineProperties({}, propertiesDescriptor),
    b = Object.defineProperties({}, propertiesDescriptor);
a.shared === b.shared; // true !

Describe A Descriptor With Descriptors

This sounds like descriptorception and it's exactly that: a property described as a property descriptor, able to be different every time the descriptor is used to define properties.
Here is how we could enrich the propertiesDescriptor object in order to have a runtime property too.
Object.defineProperty(
  propertiesDescriptor,
  'runtime',
  {
    enumerable: true,
    // a getter is required
    get: function () {
      // so that every time the object
      // is used as properties descriptor
      // this returned value will be used
      // as "runtime" descriptor instead
      return {
        value: Math.random()
      };
    }
  }
);
We can perform the check one more time now against the same code.
var a = Object.defineProperties({}, propertiesDescriptor),
    b = Object.defineProperties({}, propertiesDescriptor);
a.shared === b.shared; // true !
a.runtime !== b.runtime; // true again, hooray!

Non Scalar Only Values: Achievement Unlocked

This is pretty much what we have achieved with the latest trick: the possibility to recycle and reuse a descriptor, being sure it will hold all the static/scalar/constant properties we wanted, while also creating new objects, methods, or features each time the same descriptor is used to define one or more properties.

A Basic Counter Example

Let's say we'd like to know how many times the same properties descriptor has been used during a program's lifecycle, you know what I mean? All together:
var propertiesDescriptor = {
  shared: {
    value: Math.random()
  }
};

Object.defineProperty(
  propertiesDescriptor,
  'runtime',
  {
    enumerable: true,
    get: function () {
      this.__count__++;
      return {
        value: Math.random()
      };
    }
  }
);

Object.defineProperty(
  propertiesDescriptor,
  '__count__',
  {
    writable: true,
    value: 0
  }
);

var a = Object.defineProperties({}, propertiesDescriptor),
    b = Object.defineProperties({}, propertiesDescriptor);

alert([
  a.shared,  // 0.1234
  b.shared,  // 0.1234
  a.runtime, // 0.5678
  b.runtime  // 0.8901
].join('\n'));

alert(propertiesDescriptor.__count__); // 2

Lazy Value But Not Lazy Property

The last example is about creating a descriptor with a specific property that will create a new value once the descriptor has been used with a generic object.
This time it's about creating the property value lazily, only when accessed through the extended object rather than at definition time ... right?
var propertiesDescriptor = {
  shared: {
    value: Math.random()
  },
  lazy: {
    configurable: true,
    get: function () {
      return Object.defineProperty(
        this,
        'lazy',
        {
          value: []
        }
      ).lazy;
    }
  }
};

// OR
Object.defineProperty(
  propertiesDescriptor,
  'lazy',
  {
    enumerable: true,
    value: {
      configurable: true,
      get: function () {
        return Object.defineProperty(
          this,
          'lazy',
          {
            value: []
          }
        ).lazy;
      }
    }
  }
);
At this point the object, let's say a generic constructor.prototype, will be described with a shared accessor, so that each instance, together with the prototype itself, redefines that property only when accessed.
This is in theory better for memory usage and GC operations but hey ... I've said since the beginning these were just tricks, haven't I?
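To see it in action, here is a hypothetical usage of the descriptor above, where Collection is just an illustrative name:
function Collection() {}
Object.defineProperties(Collection.prototype, propertiesDescriptor);

var c = new Collection();
c.lazy.push(1); // first access: the getter defines an own 'lazy' array
c.lazy;         // [1], from now on a plain own property of c
new Collection().lazy === c.lazy; // false: each instance gets its own array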
Sim Sala Bim!

Wednesday, June 12, 2013

On Harmony JavaScript Generators

Developers get easily excited when something so used, acclaimed, and desired in another land comes to their own ... or the one they think they own ...
This is the case of ECMAScript 6 Harmony generators, something that at this time you need to activate through the --harmony flag in node, or by going to about:flags in the Google Chrome Canary url bar and enabling the experimental harmony extension.
Once you've done that, you'll be able, still through Chrome Canary at this time, to test the examples, benchmarks, and other things in this post ... ready? So here is the first great news:

Generators Are Slower

At least two times slower than forEach and about 10 times slower than regular loops.
This is the result shown in this jsperf benchmark, which you can run too.
Of course generators are slower: there's a massive machinery behind each generator, something like:
{
  code: 'the generator function body',
  context: 'the bound/trapped context',
  scope: 'the whole scope used by the generator',
  handler: 'to perform iterations',
  state: 'the current generator private state',
  // inherited from GeneratorConstructorPrototype
  send: 'the method to address values',
  throw: 'the method to throw Errors',
  next: 'the method to keep looping'
}
// plus every 'next' or 'send' call
// will return a fresh new object
{
  value: 'the current value at this time',
  done: 'the boolean value that helps with iterations'
  // when done is true, an extra call
  // to .next() will throw an error!
}
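You can verify the per-call allocation yourself; this minimal snippet runs wherever the --harmony flag mentioned above is enabled:
function* g() {
  yield 1;
  yield 2;
}
var it = g();
var a = it.next(); // {value: 1, done: false}
var b = it.next(); // {value: 2, done: false}
a === b;           // false: every call allocated a fresh result object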

Not Only Slower ...

The fact that every interaction with a single generator creates N objects means the garbage collector will work more than necessary and RAM will be easily saturated whenever your server does not have a big amount of it ... and cheap hosts are still the most common choice; besides, if the program/language is greedy, why should you spend more on hosting hardware? You should not, as easy as that.

... But Feel Free To Use Them

If you believe generators can help with anything in your logic, infrastructure, or system, and you don't need the best performance for that situation, go with generators. They have been used in Mozilla internals for a while, since Firefox was version 3 or even lower: can you believe it?
So, if they worked before becoming part of a standard and before hardware was as good as it is today, there must be use cases where generators are a better choice ... right?

JavaScript Never Needed To Sleep !!!

Unfortunately, apart from some academic Fibonacci exercise or, even worse, some sleep(delay) example, there's not much more you'll find about how cool generators are in JS ... simply because JavaScript never really needed them, being an event handler oriented programming language, where events have always worked even better than generators do for other languages, since events can be triggered at any point, not just in a synchronous "top-to-bottom" flow.

Coming From Mars

One common problem in JS is that every newcomer looks for what is missing the most from her/his old programming language ...
  • PHP developers never complained about missing types; they'll rarely get how prototypal inheritance works there, though
  • Java developers complain about missing types ... they'll try to use JS flexibility to make it as similar to Java as possible, understanding inheritance slightly better than PHP devs and abusing closures by all means to make it as super() compatible as possible, 'cause ParentClass.call(this); inside the ChildClass constructor freaks them out
  • C# developers think they have all the best there ... forgetting C# is not statically compilable and it is derived from ECMAScript 4th Edition, almost 2 editions before the current JavaScript specification ^_^
  • C++ developers will propose new optimized Virtual Machines every day and most likely will never use JS ... still, they will decide how JS developers should use JS regardless
  • Python and Ruby developers will just laugh about all the JS shenanigans, thinking their favorite language has none of them, or worse
Well, here is the thing ... generators and the yield keyword are really old concepts from languages that have not been created to work asynchronously as JS does, including all those mentioned in the list above.
That's why I believe the generators' aim is being misunderstood by the JS community ... and once again, feel free to use them as much as you want, but please keep reading too, thanks!

Queuing The Delay

If you start waiting for events after other events in a generator way:
var file1 = yield readingFile('one'),
    file2 = yield readingFile('two'),
    combined = file1.value + file2.value;
Here is the bad news: that won't magically work as you expect!
// a magic function with many yields ...
function* gottaCatchEmAll(fileN) {
  for (var i = 0; i < arguments.length; i++) {
    yield arguments[i];
  }
}

// a magic expected behavior that won't work
// as many might expect ...
var content = gottaCatchEmAll(
  'file1.txt',
  'file2.txt'
);
Calling content.next() gives us back an object whose value we can eventually store, as long as no error has been thrown and the done property is false, but no parallel file loading will be performed by any means!
That's correct: what node.js elegantly solved with what JS was already offering is screwed again by this new approach, which won't block but won't execute in parallel either.
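Stepping through it makes the laziness obvious; note that yielding a file name reads nothing from disk (a minimal illustration reusing gottaCatchEmAll from above):
var content = gottaCatchEmAll('file1.txt', 'file2.txt');
// nothing has executed yet: generators are lazy
content.next(); // {value: 'file1.txt', done: false}
content.next(); // {value: 'file2.txt', done: false}
content.next(); // {value: undefined, done: true}
// and still, not a single byte has been read from disk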

Still Room For New Users

The controversial part about generators is that they might be useful to synchronize sequential, inevitably delayed or dependent executions while still not blocking other handlers ... well, here are a couple of thoughts:
  1. try to make a generator behave as you expect ... seriously!
  2. try to learn how to use a queue instead
Not kidding: the second option is much easier than expected, it's a Promise-like approach compatible with every environment, and it fits in a tweet.
function Queue(a,b){
setTimeout(a.next=function(){
return(b=a.shift())?!!b(a,arguments)||!0:!1
},0);
return a}
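Golfed as it is, the snippet deserves a translation; here is a hypothetical, readable expansion of what it does:
// same logic as the tweet-sized Queue above, expanded and commented
function Queue(callbacks) {
  // attach a reusable `next` method to the callbacks array itself
  callbacks.next = function () {
    var callback = callbacks.shift();
    if (callback) {
      // invoke it with the queue plus whatever arguments
      // `next` received (e.g. err and data from fs.readFile)
      callback(callbacks, arguments);
      return true;
    }
    return false;
  };
  // start asynchronously, so that extra properties can be
  // attached to the queue right after its creation
  setTimeout(callbacks.next, 0);
  return callbacks;
}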

How Does That Work?

I've tried to explain that in detail in this working with queues blog post, and at the same time I have written a slightly improved queue so that arguments can be passed between callbacks.
var fs = require('fs');
var q = Queue([
  function onRead(queue, args){
    if (args) {
      // add result to the content
      queue.content.push(args[1]);
      // if there was an error ...
      if (args[0]) {
        // attach it to the queue object
        queue.error = args[0];
      }
    } else {
      // first time execution
      queue.content = [];
    }
    // if there's anything to read
    if (queue.files.length) {
      // add "priority queue" to itself
      queue.unshift(onRead);
      // so that once done ...
      fs.readFile(
        // ... reducing the number of files to read
        queue.files.shift(),
        // ... will be re-executed
        queue.next
      );
    } else {
      // simply fire the end of this thing
      queue.next();
    }
  },
  function theEnd(queue) {
    // if there was an error ...
    if (queue.error) {
      // throw it or do whatever!
      throw queue.error;
    }
    // otherwise simply show results
    console.log(queue.content.join(''));
  }
]);

// files to load
q.files = [
  'file1.txt',
  '/user/attempt/file2.txt'
];

OH Come On What Is That

If you think dealing with generators is easier, and that the real effort behind the yield keyword is less verbose than the abstract example above over a single use case, I am here waiting for your link showing me the ease, the cross version/platform compatibility, and the performance (and I am not talking about your latest MacBook Air but about Raspberry Pi-like hardware, which is suitable for, and already used as, a web server) of your generator based solution, willing to reconsider my point of view and change some module in order to switch, even if not needed, to this new approach.
Right now I see this new entry as completely overrated, able to bring fragmentation between node.js and the Web, and unable to concretely simplify or solve parallel asynchronous operations as elegantly as events do through emitters.
Thanks for your effort reading 'till the end.
Some comments from outside this blog:
  • Alex Russell on performance, and my reply, which is: bound functions are still slow. I am not expecting generators to be faster than bound functions at any time in the near future

Wednesday, June 05, 2013

ArchLinux Might Not Play Cool

Update

This rant, referring to that time, is still valid. However, the latest ArchLinuxARM package has been updated after the big change and everything works again as expected. Thanks Arch Linux for updating all packages, appreciated!
I'm actually avoiding a title such as WTF ArchLinux so as not to leave a mark on this awesome community ... I mean, the best of the best: an always updated Linux distro for many devices and architectures, blazing fast on boot time and freaking lightweight, so you can put in any extra you want and nothing more ... how can anyone ask for more ...

Kids Play Too Much There

No, really ... this is not about blaming anyone specifically but seriously, a change able to brick every device ain't cool, guys, not at all.

Cannot Trust Pacman Anymore

The main tool designed to update your system is now at odds with basically every single tutorial about installing Arch Linux I could find on the net. pacman is doomed by an update, philosophically superfluous, able to make the update manager itself a joke.
That's correct: now you need to update all the tutorials out there that say
to update your system, you should simply pacman -Syu and working magic happens
Now every single piece of the internet related to pacman, or to how ArchLinux is updated, won't work anymore.

Cubieboard Is Just One

Every single article that makes you happy about installing ArchLinux on this board, or any other, will piss you off if the source files you downloaded predate the 4th of July 2013, because the moment you can say hooray, it worked and keep following the instructions telling you that the best thing to do after a successful ArchLinux installation is to run pacman -Syu as the first command, that very command will nicely fuck up all your effort.

Focusing On Something Else

I am a developer and I love tools. The OS is just one of them to me, the one that lets me work and experiment with things I do for work or things I do for myself.
Linux has been there since ever and every related OS uses some rootfs structure that worked without problems until now; I don't want to find out that everything relying on that structure won't work anymore because of some decision that is not practically friendly to any user in the community, since it's not compatible with the single package manager the community is using ... I mean ... seriously ... WTF!!!

Just A Haughty Community ?

The worst part is that I was going to humbly open a post in your forum, then I realized that to do so I have to register and answer this question:
What is the output of "date -u +%V$(uname)|sha256sum|sed 's/\W//g'"?
I've never seen anything dumber than that ... first of all, if that's meant to stop robots, a robot most likely runs on Linux and is perfectly able to execute the command wrapped in quotes and produce the result automatically; secondly, the moment I want to register to the forum it is probably because one of your community "hackers" made such a dumb mistake that I cannot even use my ArchLinux machine anymore ... so, whoever thought that was a good way to welcome forum members has a very regular IQ, if not lower than that.

Focus On Something Else

The moment a community with a product as vibrant and excellent as ArchLinux starts wasting everybody's time with philosophical changes that in practice break everything is the moment that community needs to breathe for 5 minutes and think about what the hell is going on and what's really needed to make the community better, instead of pissing everybody off with a decision that, no matter how OK and right it was, should NOT have broken all the users who trusted the package manager to simply work as explained and emphasized everywhere on the WEB.
Apologies for the rant but ... bloody hell, I've got all my boards screwed because of this "little change" in the whole OS.

Even Dumber

The page telling us how to update things suggests:
# pacman -Syu --ignore filesystem,bash
# pacman -S bash
# pacman -Su
... too bad the current latest pacman cannot be installed without the updated bash too ... congratulations!

Saturday, June 01, 2013

The node.js Relative Path Case

Right now, it sucks because ___, as @izs told me to start with ... I could not find a simple way to resolve a path from a module whose exported function is used inside another one.
Spoiler: the reason I am trying to resolve paths is to load fresh modules at runtime in polpetta. However, this might be a bad practice.
@WebReflection Lest I commit malpractice, I must tell you this is a terrible idea. Now warned, rock on with your bad self. Hack away :)
Still, my point is that it might be handy to resolve paths relative to the invoker, regardless of the why, even if you should always ask yourself that ;)
This case is quite easy to misunderstand, so I'll skip extra explanations now and put down some code.

relative.js

The aim of this example file is to log two different path resolutions as soon as possible: the one from the path module, which passes through process.cwd(), and the one from the relative.js file itself.
Object.defineProperties(this, {
  parent: {
    get: function () {
      // used later on
      return module.parent;
    }
  },
  resolve: {
    value: function (path) {
      // it will resolve from this file
      // not from the invoker
      return require.resolve(path);
    }
  }
});

// path module resolves relatively
// from the current process.cwd()
console.log(require('path').resolve('.'));

// require resolves relatively from
// the current file
console.log(require.resolve('./relative.js'));
Running this from the node terminal will most likely show something like:
require('./test/relative.js');
/Users/yourname/code
/Users/yourname/code/test/relative.js
Neither the logs nor the resolution are actually OK if we would like to resolve relative to that file's path.
If we would like to use that module's method, the resolve() one, we cannot trust the current path.
// will throw an error
require('./test/relative.js').resolve('./test/relative.js');

// will pass
require('./test/relative.js').resolve('./relative.js');
If we install that module through npm, globally or even locally in some parent folder, gosh knows where we should start the relative path resolution according to the module itself, you know what I mean?

Being Relative To The Invoker Path

In order to resolve relative to the invoker, we need to know at least where the invoker is.
Thankfully, this is easy, but be aware of the caching problem:
// relative.js
var
  path = require('path'),
  relativeDir = path.dirname(
    module.parent.filename
  )
;
this.resolve = function (module) {
  return require.resolve(
    path.join(relativeDir, module)
  );
};
At this point we can invoke the method as expected, without errors, from the process folder.
// will log the right path
require('./test/relative.js').resolve('./test/relative.js');
Good, we are able to resolve module names; problem is ... only from the very first module that required relative.js, due to module caching, so that module.parent will be one, and only one, for any other module.
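A hypothetical file layout makes the problem clear:
// main.js
require('./a.js');
require('./b.js');

// a.js
require('./test/relative.js'); // first require: module.parent is a.js

// b.js
require('./test/relative.js'); // cached! module.parent is STILL a.js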

A Hacky Solution

In order to avoid the caching problem within the required module itself, I came up with this trick at the end of the file:
// .. same content described above ...

// remove the module itself from the cache
delete require.cache[__filename];
In this way, every single module that requires('./some-path/relative.js') will get a fresh new version of that module, so that module.parent, and its filename, will always be the right one: how cool is that?
I am able to resolve, from any outer module, a path the same way require.resolve(path) would do inside that module, which is exactly what is needed and the goal of require-updated: any module can use paths as if these were resolved from the file itself in order to require some other file, relative, absolute, or globally installed.
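Putting the two pieces together, the whole relative.js boils down to this minimal sketch:
// relative.js
var path = require('path');
var relativeDir = path.dirname(module.parent.filename);

this.resolve = function (module) {
  return require.resolve(path.join(relativeDir, module));
};

// remove this module from the cache so that the next require()
// re-evaluates it with a fresh, correct module.parent
delete require.cache[__filename];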
Still I believe there should be a better way to do this ... what do you say?

Thursday, May 16, 2013

Object.setPrototypeOf(O, proto) IS in ES6

This is a short one, just to confirm that Object.setPrototypeOf(obj, proto) finally made it into ES6.
Section 15.2.3.2 speaks clearly:
15.2.3.2 Object.setPrototypeOf ( O, proto )
When the setPrototypeOf function is called with arguments O and proto, the following steps are taken:
  1. If Type(O) is not Object, then throw a TypeError exception.
  2. If Type(proto) is neither Object nor Null, then throw a TypeError exception.
  3. Let status be the result of calling the [[SetInheritance]] internal method of O with argument proto.
  4. ReturnIfAbrupt(status).
  5. If status is false, then throw a TypeError exception.
  6. Return O.
Without going into details, the most basic polyfill would be like this:
(function (O, s) {
  O[s] || (O[s] = function (o, p) {
    o.__proto__ = p;
    return o;
  });
}(Object, 'setPrototypeOf'));
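Usage is then straightforward:
var proto = {
  greet: function () {
    return 'hi';
  }
};
var obj = Object.setPrototypeOf({}, proto);
obj.greet(); // 'hi'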
If you want to go into details, a fully reliable polyfill is hard to write down due to all the inconsistencies across the JS engines out there, where the __proto__ setter cannot be reused, which means it's not possible to trust this magic with objects created from null.
All you need to do is forget __proto__ and use the suggested polyfill until the day __proto__ can simply disappear from those specs pages.
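A quick way to see the objects-created-from-null limitation in practice:
var dict = Object.create(null); // no Object.prototype in its chain
dict.__proto__ = {};            // a plain own property named __proto__ ...
Object.getPrototypeOf(dict);    // ... still null: no magic happened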
Enjoy!
This is a full polyfill with some extra info you might want to analyze, whenever polyfill is strictly true or false.