Saturday, May 14, 2016

Can I haz appearance:none; for the video element?

I didn't think it would be this hard to make a video NOT have native controls.

And it isn't, at least not for the majority of web developers. The video tag has a handy controls attribute: add it and you get native controls; leave it off and you don't.

The thing is, however, that many sites, for many different reasons, specify their own controls using JavaScript. I'd like to keep doing the same for Wikipedia.

And pictured above is my problem. I call it the 'Flash of Native Controls'. Something like this is extremely distracting for visitors.

The solution seems simple: just remove the controls attribute from your HTML. But I'd rather not do that, because at Wikipedia we have many re-users of our generated HTML content, and most of those re-users don't run the same JS stack. Removing the controls attribute means they won't have video controls at all. I want controls; I just want MY controls instead of the user-agent's.
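
To make concrete what such a JS takeover looks like, here is a minimal sketch. Note that takeOverControls is a hypothetical helper for illustration, not Wikipedia's actual player code:

```javascript
// Sketch only: keep the controls attribute in the shipped HTML (for re-users),
// then strip it at runtime before attaching a custom control widget.
function takeOverControls(video) {
  // Drop the user-agent controls that the HTML ships with.
  video.removeAttribute('controls');
  // Mark the element so our own (hypothetical) control UI can attach later.
  video.setAttribute('data-custom-controls', 'pending');
  return video;
}

// In a browser this would be applied as, e.g.:
//   document.querySelectorAll('video').forEach(takeOverControls);
```

Until a script like this runs, the browser has already painted the native controls, which is exactly the flash described above.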

Many other solutions have been suggested to me, but all have downsides: messing with the HTML structure (sorry, don't want that), using inline scripts (want that even less), using iframes (nice, but I shouldn't have to), and so on.

Honestly, I see this as a styling problem. I should be able to specify this using CSS in the head element, so that when the first paint of my HTML content occurs (usually well before the JS starts executing), my video already has no controls. The poster stays in a 'non-interactive' state a bit longer, but in general I'm fine with that. People won't watch a video in the first 300ms, but they will notice all those flashes when I have 50 video players on a page and a 3G connection.

And we used to have this in WebKit browsers. Using CSS we could specify something like:
video::-webkit-media-controls { display:none !important; }
However, that refers to the Shadow DOM, and Shadow DOM internals are supposed to be isolated from page styles.
And there never was a Firefox/IE version of this to begin with, so :(

Image by Jason Z
I figure this really is not much different from the type attribute of an input element. It carries semantic meaning; it doesn't and shouldn't mean that I want the native controls. And browser vendors have recognized this, because even though it's not part of any standard, they have over the years provided web developers with the appearance CSS property, giving them the option to take over the look of these increasingly specialized input controls.

What I really need is an "appearance: none" for the video element.

So why is this not a standard yet, and why couldn't we use the same property for the video tag?

Wednesday, May 4, 2016

El Capitan Bluetooth woes

I got a new iMac recently, and after transferring my account using Migration Assistant I rebooted the Mac. From that moment on, for the life of me, I could not get Bluetooth working anymore. In the menu bar I had a nice icon with a squiggly line through it.

In the system log I could find nice informative lines like:

IOBluetoothUSBDFUTool[324]: Could not get IOBluetoothUSBDFU service

I went through tons of blog posts and forum posts with all kinds of advice, mostly coming down to:

  • Unplug all USB devices, shut down and then restart
  • Reset your SMC
None of this really worked. I then started the Mac in Safe Mode by keeping the Shift key pressed, and now I was able to use Bluetooth. Of course, you can't boot your Mac in Safe Mode all the time... But it does indicate that some 3rd-party component was messing things up. Possibly a kernel extension, or at least a preference connected to a kernel extension.

I have a few of those installed, but I went with my gut feeling: Logitech. I own a very nice M500 corded mouse, but the Logitech software has caused problems with macOS upgrades before, and Googling around indicates I'm not the only one with this experience. I downloaded the installer and reinstalled the Logitech software (even though I already had the latest version installed). And lo and behold, one reboot later things work again...

Not sure if it indeed was Logitech, or whether the Logitech installer just triggered a process that kicked the kernel extension loader in some way, but at least I can use Bluetooth again.

Monday, April 25, 2016

Using git svn on El Capitan

I needed git svn to do an svn migration, and it was a very painful experience figuring out how to get it working again after I upgraded to El Capitan. It seems that with System Integrity Protection, Apple has broken both the Ruby and Perl setups to some degree, and it's taking a while for everything to catch up.

Short story:

  1. Install Homebrew.
  2. Install ruby using Homebrew.
  3. Install svn using Homebrew.
  4. Install a new version of git using Homebrew.

Now you still get an error:
Can't locate SVN/ in @INC (you may need to install the SVN::Core module)
For this, run the following, to create some symlinks that are required for perl.
sudo mkdir /Library/Perl/5.18/auto
sudo ln -s /Applications/-2level/SVN /Library/Perl/5.18/darwin-thread-multi-2level
sudo ln -s /Applications/-2level/auto/SVN /Library/Perl/5.18/auto/
Dear Apple... Remember when stuff just worked? When you cared about developers? That golden period in the early noughties? Something like what Microsoft is beating you at now? Please bring that back!

Monday, October 8, 2012

5 years of Article message boxes

Do you recognize these boxes? Most likely you do. These are the very recognizable "amboxes", short for "Article message boxes". They are often visible at the top of articles on Wikipedia and are among the most recognizable elements of those articles.

Today I noticed that these boxes are now just over 5 years (and a month) old. They were first introduced to the general public in September 2007. Their features, in short: a single consistent design, color coding for severity and purpose, a dynamic but consistent width (stackable), IE 5.5 and IE 6.0 compatibility, and a consistent parameter setup for their content.
And that is a big deal, because I still remember what it looked like before, when we had none of that. There were dozens of templates with different widths, different colors, different spacing, and they all had different parameters. [I've been trying to find an image from back then, but I haven't been able to find one. Please let me know if you find one.]

It seems like just yesterday that I was one of many people participating in their creation. The main design idea of the color bars at the left side seems to have come from [[User:Flamurai]], who apparently already envisioned this in November 2006, calling it 'Blanca'. It seems he is no longer editing, but I would still like to thank him for this wonderfully simple idea that has been in use, seemingly without much opposition, for so many years now. Most of the implementation was spearheaded by David Göthberg, if memory serves me well.

The revamp led to an entire family of templates for notices on different kinds of pages in 2008: {{mbox}}, {{cmbox}}, {{imbox}}, {{tmbox}}, {{ombox}}, {{asbox}} and {{fmbox}}. In the end it was a very collaborative effort in which three dozen or so active editors made important contributions, including such well-known names as MZMcBride, Anomie, Happy-Melon, David Levy, Quiddity, RockMFR, Remember the Dot, Ilmari Karonen, Father Goose, Ned Scott and many others. A lot of effort, opinion and testing went into these templates back then, and in my opinion that is why they have been so successful for so long.

So to all those involved over 5 years ago in creating the new article message box styles, congratulations and a big thank you. Especially [[User:Flamurai]] and [[User:David Göthberg]].

Sunday, July 29, 2012

Bleeding edge, or is it?

As most people know, Wikipedia usually runs the bleeding-edge code of MediaWiki. Currently, new versions are deployed every 2 weeks. This is great, necessary and sometimes annoying for Wikipedians. There is a common complaint that MediaWiki treats Wikipedia as its experimentation grounds.

On the other hand, MediaWiki is overly focused on Wikipedia. Without Wikipedia, I think the default MediaWiki would look a lot more like Wikia than like Wikipedia. In my opinion, if MediaWiki treats Wikipedia as its sandbox, it does so because the only sandbox that compares to Wikipedia is Wikipedia itself. There ARE no other viable experimentation grounds that compare to the distorted reality of Wikipedia.

So how bleeding edge is bleeding edge? Code is deployed almost every 2 weeks, yet HTML5 has been the default for MediaWiki for over 3 years now and has still not made it to Wikipedia, for all sorts of compatibility reasons and to accommodate the volunteer tech community.

HTML5 mode is currently scheduled to be deployed this summer.

Saturday, March 31, 2012

MediaWiki: from svn to git & gerrit, and a bit of math

It's been a while since I wrote here. I wanted to discuss a great change that has come to MediaWiki: the adoption of Git and Gerrit in place of our old Subversion system. It has been discussed at length already, but I wanted to talk about the actual switch process and what it meant for me as an individual.

TL;DR version: little time, big switch, Gerrit needs lots of work, more coherent documentation needed, and stay vigilant. Good or bad cannot be stated yet.

Where I'm coming from

First of all, I should clarify that I have already used Git quite a bit. We used it within VideoLAN, and I use it myself almost daily as a wrapper around some of the Subversion repositories I work with. So you could say that using it should not be too troublesome for me. I already know the commands and the principal ideas behind Git, and how they differ from other SCM systems. The only new addition is Gerrit...

I have little time on my hands to work on Wikimedia and MediaWiki these days: 3 hours total during the weekdays, 4 hours in a weekend, and that's about it. Most of that time is spent reading Bugzilla, the Village Pump and mailing lists, updating my code, or doing other things required to 'keep up' with current affairs. The rest I tend to fill with smaller bugfixes, which easily fit within an hour or two of available time. So switching repo systems might seem like a small thing, but if you have 3 installations of MediaWiki and you need to convert them, and install some additional software on the side to later submit your code, then that basically fills one week's worth of the time I can put into the projects. Since I had quite a bit of time last weekend, I figured I'd better jump on the bandwagon right away, for fear of getting too far behind to catch up any time soon.

Doing the switch

All in all, the process of actually getting the code was easy for someone familiar with Git. The most difficult bit was switching the 3 local installations that I use for testing stuff. I decided to cut one, so that left 2. But still, swapping out all the extensions in two installations for their Git variants, and finding out that some of my installed extensions were NOT switched to Git but are still in Subversion (after already having deleted them, of course), was quite a task. In summary: installing new versions of git and git-review and updating half of my MacPorts-installed software took 1.5 hours; switching the repos and their extensions, and migrating the stuff I had changed locally but not yet had a chance to commit, took about 2.5 hours before everything was up and running again with patches made of my local changes.

Changing workflow and a bit about math

Math in Wikipedia
So, everything fixed, you'd think? Well, sort of. The other issue I immediately recognized, and the more problematic one in the long run, was the change in workflow. For someone doing only about 3 hours of coding a week on a project like this, anything interrupting your workflow takes huge chunks out of those 3 hours. So I decided I'd better get on with it and learn right now, so I could identify blocking issues and see them solved as soon as possible. Where to start... I always find it best to pick 'small' identifiable chunks of work in these cases, somewhat isolated from the regular workflow. I picked the Math extension.

The Math extension is basically a LaTeX math parser that can take math entered in, for instance, Wikipedia and show it in an article using either HTML or rendered images. This is needed because the original web really had no way to properly render math: it requires too many symbols that are not part of any regular font, and the positioning of those symbols goes far beyond what is easily possible with HTML. So the extension interprets the LaTeX code and, where possible, uses HTML, but more often renders the text with a LaTeX renderer to a PNG image that it then includes in the web page. There are many downsides to this approach, but for years it was the only way to do it remotely predictably. I'm also a bit familiar with the code, as I had applied several patches to it in the past.
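
As a generic illustration (not taken from any particular article) of the kind of input the extension handles:

```latex
% Wikitext input as an editor would type it:
%   <math>\int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}</math>
% A trivial expression such as x^2 can be emitted as plain HTML
% (an italic x with a <sup>2</sup>), but the integral above cannot,
% so it gets rendered server-side by the LaTeX toolchain to a PNG
% and embedded in the page as an <img>.
```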

I love keeping an eye on stuff like this: the stuff that limits the quality Wikipedia can deliver in areas where it really does want to deliver. The same goes for mobile, video, music scores, text with a directionality other than left-to-right, and text for which no fonts exist. Those tickets are often on my Bugzilla watchlist.

Math using MathJax
For math, things have finally progressed after 8 years of status quo. Over the past few years we have seen the maturing of MathML, the standard that is supposed to bring math to the web. We have also seen the arrival of webfont technology and ever more advanced JavaScript. All this has led to the creation of MathJax, a JavaScript library that uses what it knows about your browser to generate readable math equations in HTML or MathML, and provides the reader with the fonts needed to properly represent them, regardless of how well the browser or OS supports math. It has several downsides as well: it is slow as hell and incredibly complicated, but the big plus is that it actually works without requiring images (unless you are on real ancient stuff like IE6).

For years math has been a hot debate within Wikimedia, with people desperately desiring higher quality and better reusability. With the advent and maturing of MathJax, that finally seems possible, and MathJax was an oft-requested feature of the Mathematics project on the English Wikipedia. A while ago, User:Nageh developed a MathJax user script to bring MathJax to Wikipedia. When Brion Vibber was looking into how to progress with Bug 24207, he was pointed at Nageh's script and decided to investigate whether it was possible to properly integrate MathJax into the Math extension, to finally start improving math rendering in Wikipedia. He came back with some preliminary results pretty quickly.

Committing patches

Brion did the heavy lifting of making it MediaWiki-ready, but as always with conversions like these, there are loads of 'small things' that need to get done before it can actually be deployed on Wikipedia. That made it a perfect area for me to test the new Git/Gerrit workflow by creating a few patches and submitting them to Gerrit.

My Gerrit dashboard with the changes in question.

My experience:

  • Gerrit is an awful interface. It's like going back to bugzilla if you are used to Jira. Much work will be needed there.
  • One of the downsides I found to Gerrit is creating permanent links to users or lists of changes. I often want to look at a stream of a user's changes, or at a particular type of change, and there is no way to do this other than to keep clicking in Gerrit until you find that list. Basically, anything but permalinks to patch submissions and individual patches seems hard to get your hands on.
  • Git-review is taking away much of the Gerrit trouble, but still commands like: git fetch ssh:// refs/changes/71/3771/1 && git checkout FETCH_HEAD tend to be needed in my workflow and that's just unfriendly and difficult to work with.
  • Gerrit is great for individual patch review.
  • Gerrit is terrible at "huge rewrite review" at first look.
  • Gerrit is terrible for "many sequential patches building on each other" review AND workflow. I want to commit often and get small chunks reviewed if they are useful on their own, without squashing them into one huge patch that I then need to spend hours in review limbo over. However, it's almost impossibly disruptive to the workflow as soon as something intermediate needs to be adapted halfway through one of those dependency chains.
  • This will require a lot of getting used to, especially for the newbies. We'd better set up a system to feed Gerrit with Git patch diffs created using git format-patch; that's much easier for the real n00bies.
  • We need to do more about tracking patch changes and how individuals have contributed to the final patch. It's subpar in Gerrit in my opinion (most of the history stays in Gerrit instead of in the final merge).
  • I'm not sure yet whether my workflow will be faster or slower, but I suspect slower. On the other hand, I think we will see more submissions of code that we as developers are not really sure about. I have a slew of changes that require further testing and review, but due to lack of setup or feedback from other developers they have basically been in limbo forever. I plan to just submit them and see what happens. They might rot in Gerrit review limbo, but they won't rot on my HDD anymore. Hopefully having them in review will force someone to pick up the pieces.

A verdict ?

Is it good, or is it bad? I'm not sure yet. My current guess is that it's a boon for the quality of the source code, and that it will probably speed up the overall pace of development, as well as the workflow of the more experienced developers. I fear, however, for those still learning. The workflow has become considerably more convoluted, adding the risk of people giving up halfway. That's not guaranteed to happen, of course, and I still have hope that by improving Gerrit and working on our workflow and tooling documentation we can stave that off. I know Sumanah, Roan and several others have been working tirelessly on just that. But we are not done by a long shot on that front, I fear.

I also fear that MediaWiki is gonna become more like Wikipedia: a gigantic set of rules you need to satisfy before your article change/patch makes the cut, driving up the requirements we put on our editors/committers and widening the gap between those just getting started and the vets. We need to stay incredibly welcoming and make sure volunteers feel their changes are accepted as they are, instead of overly reworked and rewritten to make the community cut. There is no feeling as rewarding for those still learning as seeing your change go straight into a major piece of software. We need to keep a close eye on that as a software development community.

Thank you for reading

That's my story of the switch that I went through almost a week ago now. Find more experiences from other developers on the Wikitech mailing list. And a shout-out to the Signpost for once again properly summarizing a complicated topic in last week's edition.

Saturday, September 17, 2011

2011 and the Y2K bug

It has been almost 12 years since we all had to worry about the Y2K bug, right? Well, you'd think. Over the past few weeks I have been bothered by a problem with session management in one of the apps that I'm writing. I couldn't figure out why stuff was behaving so unexpectedly. At some point the hints became clearer and clearer that the dated cookies of the session were, for some reason, not being expired. The iOS URL connection and the Android HTTP library seemed to keep sending them along to the server after logging out. This was hard to confirm, though, because both platforms hide the Cookie header from you when you make the request, the connection was HTTPS, and I didn't have physical access to the server.

It made no sense, however, that iOS would have a fundamental cookie management bug. So I built a small server and started testing cookie management on the iPhone. Everything looked just fine. Then I decided to copy the actual cookies the server was sending to the clients. I could get these values because the Set-Cookie headers in the response (unlike the actual Cookie header in the requests) were visible. So I switched the values of my test server to the actual values from the server, and suddenly I was able to reproduce the problem. The Set-Cookie that was supposed to expire the cookie seemed to turn it into an undated cookie (scoped to the session of the client instance).

I switched back to my old values and stuff started working again. Again I copied the original server values. I selected the text and suddenly I noticed it... Expires=Sat, 01-Jan-00 00:00:00 GMT; No... that can't be it. Could it? I switched my test server to issue the year 1970 instead. Poof, suddenly it worked. So first of all, 12 years after 2000 there is still a server sending a broken date format. And second, it seems the Y2K parsing support in iOS is broken. Experimentation shows that iOS can only parse double-digit years in cookies between 70 and 99. So any double-digit year before 1970 (the epoch) cannot be converted into an actual year. And what happens if the date cannot be parsed? Then the date is removed from the cookie altogether, and your cookie becomes a session cookie :D
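
For reference, the two-digit-year rule that a well-behaved cookie parser applies (the rule later codified in RFC 6265, section 5.1.1) can be sketched as follows; expandCookieYear is an illustrative helper, not the actual iOS code:

```javascript
// Two-digit cookie Expires years per RFC 6265:
//   70-99 -> 19xx, 00-69 -> 20xx.
// The iOS parser described above seemingly handled only the 70-99 half,
// so "01-Jan-00" failed to parse and the cookie lost its expiry date.
function expandCookieYear(year) {
  if (year >= 70 && year <= 99) return year + 1900;
  if (year >= 0 && year <= 69) return year + 2000; // the half iOS appeared to miss
  return year; // already a four-digit year
}
```

Under that rule, Expires=Sat, 01-Jan-00 00:00:00 GMT parses as the year 2000, a date safely in the past, and the cookie gets expired as the server intended.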