Month: September 2018
PHP memories
Gender:
<input type="radio" name="gender" value="female">Female
<input type="radio" name="gender" value="male">Male
<input type="radio" name="gender" value="other">Other
<br><br>
One feature – one branch policy
Intro
I remember creating a new feature on a branch during my first week of work. Back then I learned that it was much easier this way; otherwise we would need to rebase from origin/dev every time.
One feature – one branch – (one commit)
So in some teams, for each new feature a new branch needs to be created from dev (or master), the feature is developed there, and then the merge request is created.
This is quite simple: after the feature is merged, the branch is done, and you create a new branch for the next feature.
It’s very wise to squash everything into only one commit! Yes, so the history is very clean! So nice. Sometimes I don’t follow this though, and use two or three commits!
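Roughly, the day-to-day commands look like this (a minimal sketch; the branch name and commit message are made up, and many teams do the squash through the merge request UI instead of locally):

git checkout dev
git pull origin dev
git checkout -b feature/my-new-feature
# ...work, committing as often as you like...
git checkout dev
git merge --squash feature/my-new-feature
git commit -m "Add my new feature"

After the merge, the old branch can be deleted with git branch -d feature/my-new-feature.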
Ref
[1] https://medium.com/ki-labs-engineering/one-feature-branch-one-commit-4393aa0a96cd
SCons performance
MVC
MAD MAD MAD
Mutually Assured Destruction
Game Theory
Choice-supportive bias on the train
Head First Design Patterns and Christopher Okhravi
RNN
Intro
A bit before finishing my master’s, with the help of Prof. Orlando Silva from Mackenzie during his Neural Network classes, I started learning a bit about Recurrent Neural Networks, aka RNNs!
Not dead anymore
I was quite impressed by how powerful they are, and by the fact that the field stood still for such a long time until recently.
So I tried learning it, from the basics to the more advanced. The math was difficult for me in the beginning, but after several books you kind of get used to it.
Reservoir Computing
Doing this small research I came across Prof. Murilo B. at Aberdeen University and his work with Reservoir Computing. He knew so many things and so many properties of these networks.
I was impressed by how many things we can use it for, from speech recognition to chaos theory!
Some properties
Since one can think about recurrent networks in terms of their properties as dynamical systems, it is natural to ask about their stability, controllability and observability:
Stability
concerns the boundedness over time of the network outputs, and the response of the network outputs to small changes (e.g., to the network inputs or weights).
Controllability
is concerned with whether it is possible to control the dynamic behavior. A recurrent neural network is said to be controllable if an initial state is steerable to any desired state within a finite number of time steps.
Observability
is concerned with whether it is possible to observe the results of the control applied. A recurrent network is said to be observable if the state of the network can be determined from a finite set of input/output measurements. A rigorous treatment of these issues is way beyond the scope of this module!
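To make the state talk a bit more concrete, here is a minimal state-space sketch in my own notation (the matrices W, U, V and the maps f, g are my assumptions, not from the quoted definitions):

\[
x_{t+1} = f(W x_t + U u_t), \qquad y_t = g(V x_t)
\]

where x_t is the hidden state, u_t the input, and y_t the output. In these terms, controllability asks whether some finite input sequence u_0, \dots, u_{k-1} can steer x_0 to any desired state x_k, and observability asks whether x_0 can be determined from finitely many input/output pairs (u_t, y_t).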
Git
I forked from the Torch RNN library here [2], you should do the same!
Summary
I did a small summary of my research, and if you are more interested in it you can see it here [1]. It is very short and condensed, i.e. 15/16 pages.
[1] https://docs.google.com/document/d/1fE_6TLHhz010YeYFa1g0SHfkDYWNzL8qFD5RoroY_rM/edit?usp=sharing
[C++ tips] Enums
Intro
I personally think we underuse enums in all languages. The sentence “Francisco, you can use an Enum here instead of an int” comes to my mind.
C++
One thing that I like about the C++ enum implementation is that you can break the flow of the values:
All examples are from here [1]. I hope I don’t go to hell for using such clever examples here:
// color may be red (value 0), yellow (value 1), green (value 20), or blue (value 21)
enum color
{
red,
yellow,
green = 20,
blue
};
Blue is 21!! Like, breaking the flow!
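Just to see it happen, here is a tiny sketch of mine (not from cppreference): unscoped enumerators convert implicitly to int, so we can print the values directly:

#include <iostream>

// same enum as above: green jumps to 20, so blue becomes 21
enum color
{
    red,
    yellow,
    green = 20,
    blue
};

int main()
{
    std::cout << red << " " << yellow << " "
              << green << " " << blue << "\n"; // prints: 0 1 20 21
}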
Defect Reports
By the way, I don’t know if you like to search through Defect Reports, but it is actually very helpful; see [2] for an enum-related one.
Spending some time on them might save you time in the future.
Refs
[1] https://en.cppreference.com/w/cpp/language/enum
[2] http://open-std.org/JTC1/SC22/WG21/docs/cwg_defects.html#1638

