Reading my own blog one night, I suddenly realized that I've been writing boring or overly complicated stuff in my not-so-perfect English.
It has become so lame that I managed to fall asleep while reading it :-)
My blog has got to get its fun elements back.
That's why I wanted to blog in the first place, and that's what I think blogs should be about: fun, uplifting stories (well, gotta admit, with a bit of self-praise and ego at moments).
Hey, that could be one of my resolutions for 2008 :-)
2007 has been a good year overall; let's work and hope to make 2008 a better one.
Happy 2008, everyone.
Sunday, January 06, 2008
Thursday, December 27, 2007
How do you install Adempiere with PostgreSQL on Debian/Ubuntu?
Having to repeat these steps over and over (to help friends; I myself don't know anything -yet- about Adempiere or ERP in general), I'd better write them down here.
I know there are great clues on Adempiere's wiki (http://adempiere.com/wiki/index.php), but these steps are what's convenient for me; most are the same as in the wiki.
First things first: download the Adempiere package from SourceForge. I got this one:
Adempiere_331b
Install the sun-java5-* packages (hint: aptitude install sun-java5-*; I chose to install all of them)
Configure pljava support for PostgreSQL:
- Download the precompiled binaries package:
for postgresql-8.1: from pgfoundry.org
for postgresql-8.2: http://www.posterita.org/share/pljava.zip
- Configure PostgreSQL so that the postgres user is trusted (edit pg_hba.conf and restart the PostgreSQL server afterwards). While you're at it, remember to also configure the adempiere user's access in pg_hba.conf (I just set it to trust for local access)
- Copy pljava.so and pljava.jar from the pljava package to /usr/lib/postgresql/VERSION/lib
- Link libjvm.so (without it, the pljava install will fail; I chose a symlink rather than adding entries to ld.so.conf):
ln -s /usr/lib/jvm/java-1.5.0-sun/jre/lib/i386/server/libjvm.so /usr/lib/libjvm.so
- Add the following lines to the end of postgresql.conf:
custom_variable_classes = 'pljava'
pljava.classpath = '/usr/lib/postgresql/8.1/lib/pljava.jar'
pljava.statement_cache_size = 10
pljava.release_lingering_savepoints = true
pljava.vmoptions = ' '
pljava.debug = false
- Then add this to the environment file (/etc/postgresql/VERSION/main/environment):
JAVA_HOME = '/usr/lib/jvm/java-1.5.0-sun'
- Copy install.sql to /tmp, so it can be run by the postgres user (hint: su to root, then su - postgres):
psql template1 -f /tmp/install.sql
While we're at it, remember to also create the adempiere user:
createuser adempiere
- Restart postgresql server
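About those pg_hba.conf edits: assuming Debian's default layout (the file lives in /etc/postgresql/VERSION/main/pg_hba.conf), the trust entries I mean look something like this; adjust to your own security needs:

```
# TYPE  DATABASE  USER       METHOD
local   all       postgres   trust
local   all       adempiere  trust
```

Trust for local access is convenient on a private dev box, but not something you'd want on a shared server.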
Now that the PostgreSQL preparation is done, let's move on to Adempiere. I usually just prepare an Adempiere directory in the user's home, with this layout:
/home/USER/Adempiere/Server
/home/USER/Adempiere/Client
Also do this:
export JAVA_HOME=/usr/lib/jvm/java-1.5.0-sun
export ADEMPIERE_HOME=/home/USER/Adempiere
and remember to put those lines at the end of ~/.bashrc too.
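One way to get those exports into ~/.bashrc in one go (a sketch; it assumes the /home/USER/Adempiere layout above, written via $HOME):

```shell
# append the Adempiere environment variables to the end of ~/.bashrc
cat >> "$HOME/.bashrc" <<'EOF'
export JAVA_HOME=/usr/lib/jvm/java-1.5.0-sun
export ADEMPIERE_HOME=$HOME/Adempiere
EOF
# show what we just added
tail -n 2 "$HOME/.bashrc"
```

New shells will then pick the variables up automatically.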
Then I extract the Adempiere package into the Server directory, and copy and extract AdempiereClient.zip (you can get it from Server/lib/AdempiereClient.zip) into the Client directory.
In Server, I do:
chmod 755 *.sh utils/*.sh
Then execute: Server/RUN_setup.sh
On the configuration setup screen, I usually set the web server ports to 8080 and 4433, so that I can run Adempiere as a regular user. For the database, don't forget to set it to PostgreSQL. If, when testing the configuration, you fail to connect to the web server or the database, try changing the host to '127.0.0.1', localhost, or your computer's name, and re-test.
If all tests passed, you can save the config (click the Save button), then just wait for the setup to finish.
After setup, we can import the database structure (along with some demo data) by running:
Server/utils/RUN_ImportAdempiere.sh
If all's fine with the postgres-pljava setup, we should move along just fine here. When it's all done, we can start the Adempiere server with:
Server/utils/RUN_Server2.sh
And then we can start the client with:
Client/RUN_Adempiere.sh
On first run it will ask for configuration; just fill in the values you used during the server setup.
After that, the client should start up, showing nice graphs and dashboards. This is where I got amazed and confused and pressed the Quit button :-)
Oh, for easier launch, I usually add these two files:
/home/USER/Adempiere/server_start.sh
/home/USER/Adempiere/client_start.sh
With contents like these:
#!/bin/sh
# server_start.sh
#
export JAVA_HOME=/usr/lib/jvm/java-1.5.0-sun
cd /home/USER/Adempiere/Server/utils/
ADEMPIERE_HOME=/home/USER/Adempiere/Server /home/USER/Adempiere/Server/utils/RUN_Server2.sh
#!/bin/sh
# client_start.sh
#
export JAVA_HOME=/usr/lib/jvm/java-1.5.0-sun
cd /home/USER/Adempiere/Client/
ADEMPIERE_HOME=/home/USER/Adempiere/Client /home/USER/Adempiere/Client/RUN_Adempiere.sh
Then I can create nice launchers for them and put them on the panel or desktop. Remember to set server_start.sh to run inside a terminal.
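For the curious, the "run inside a terminal" bit can also be done by writing the launcher file by hand. This is a minimal sketch of a freedesktop .desktop entry (the path assumes the layout above; Terminal=true is the key part):

```shell
# write a desktop launcher that runs the server script inside a terminal
mkdir -p "$HOME/Desktop"
cat > "$HOME/Desktop/adempiere-server.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=Adempiere Server
Exec=/home/USER/Adempiere/server_start.sh
Terminal=true
EOF
chmod +x "$HOME/Desktop/adempiere-server.desktop"
```

GNOME and KDE both honor the Terminal=true key, so the script gets its own terminal window.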
Hope I didn't miss anything.
Friday, December 07, 2007
Telecomm network service management
This is going to be a heavy one :-)
As telecomm network technology grows, the complexity of the systems and subsystems involved grows with it. This might not be such a big deal if we could still focus our attention at the network level. Only, we don't have that privilege anymore.
Telecomm subscribers have also grown and changed; people have learned to demand more, in both the quality and the quantity of the services they get. The word "subscriber" itself is a bit ambiguous nowadays, because it can refer not only to people but also to other businesses, or even to other networks or equipment.
So operators have started to think about the problems from the Service point of view. Taking the view up one level helps them think more clearly about the problem at hand, but it also brings in another layer of complexity. From the Network point of view, a problem can often be located and fixed in a single node of a single subsystem. From the Service point of view, things get a lot more complex: a service problem may be caused by several points in the network, in different subsystems.
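To make that one-to-many point concrete with a toy sketch (the node and service names are made up): keep a service-to-resource map, and join it against the alarm list to see how a single network alarm fans out into several affected services:

```shell
# toy map of which services ride on which network node
cat > topology.txt <<'EOF'
bsc01 voice
bsc01 sms
rnc02 data
EOF
# one alarm, on one node
cat > alarms.txt <<'EOF'
bsc01 link-down
EOF
# join on the node name: every service mapped onto the alarmed node shows up
join topology.txt alarms.txt
# prints:
# bsc01 voice link-down
# bsc01 sms link-down
```

Real service quality management is of course this mapping problem scaled up to thousands of nodes and constantly changing topology, which is exactly where it gets hard.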
The growth of network complexity and the ever more demanding customers need to be taken care of in a smarter way. Not so long ago we could limit telecomm services to just voice, and could easily identify and fix a problem in the service; today we talk about different kinds of services with different requirements, running over different parts of the network.
Where in PSTN we only talked about switches and leased circuits, in GSM we got the added complexity of MSCs, VLRs, HLRs, BSCs, BTSs, VAS, and so on, and now in 3G we have even more interesting things (Node-Bs, RNCs, new interfaces, etc.) to play with; the trend will keep going up in the coming years with next-generation networks.
Funnily, errr, or is it sadly..?, I notice that network designers/engineers seem oblivious, or indifferent, to this issue: they keep designing and engineering "new-and-better" networks with extended complexity in each new release, while still (at least it seems to me) putting the service quality issue behind.
In the service quality management world, OSS plays (or should play) a much more important role than before, and people seem to have acknowledged this: equipment vendors have started to think about it, third-party consultants have started to say a lot of things, and some have even begun presenting "solutions" to help overcome the problem.
Note that it's almost always a "solution", not a ready system/product. That's because this is still an area full of complexity, and people tend to stay away from this kind of issue when they can. Some operators have started projects to handle the situation; some managed to get good results (or so I've been told), and some are still striving to get it done.
Personally, I think a "network and service quality management" solution is close to impossible to implement completely. There are simply not enough budgets and resources for it in many operators. But I still think there are many unexplored ways to attack the problem.
... to be continued ...
Human Computation
Just learned about this great idea from Dr. Luis von Ahn's presentation here. The presentation is based on his PhD thesis. I want to try to describe briefly what he is doing; his idea has turned on lots of light bulbs in my brain.
He starts by telling the story of CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart). By now we should all know about CAPTCHAs, and how they help prevent spammers from getting millions of free email accounts.
A CAPTCHA is a computer-generated test that currently only a human, and no computer program, can solve. See the paradox? This is the basic idea of Human Computation: there are still areas where humans do a better job than computers, and we do it well, way better than computers are currently capable of.
What some of us may not know is that spammers have found ways to solve CAPTCHAs. There are two basic approaches (I knew about the first, but only learned of the second from Luis' presentation).
First, spammers pay people to actually solve CAPTCHAs, but this has turned out to be costly for them. The second, cleverer thing they've done is to use p*rn sites: spammers create p*rn sites where visitors who want to see more have to enter the words of a CAPTCHA (which, behind the scenes, is then submitted to Yahoo's email registration form). Being passionate p*rn lovers, more often than not those visitors will type in the CAPTCHA as fast as they can :-)
The second example teaches a very interesting idea (in my words): we can use human computational power to solve problems computers can't, in a way that's actually "fun" for the humans, and without spending a lot of money to pay them :-) To show just how many cycles of human computation go to waste, Luis gave a figure: in 2003 it was estimated that more than 30 billion man-hours were wasted playing the game Solitaire.
So what kinds of problems might we solve? It turns out there are a lot of them. Luis gave the example of giving better descriptions to images on the web, so that image searches can return better results.
To solve that problem, Luis created (not another p*rn site, no!) two kinds of games, which he calls symmetric and asymmetric. One of the games basically just asks people to enter words that describe images. The results are then used by Google image search to help serve the right images when people search for, say, 'cat'.
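The agreement rule at the heart of the symmetric game fits in a couple of shell commands (the label lists here are made up): a label counts only when both players typed it independently:

```shell
# labels two players typed for the same image (made-up data)
printf 'cat\nkitten\nanimal\n' | sort > player1.txt
printf 'pet\ncat\nanimal\n' | sort > player2.txt
# comm -12 keeps only the lines common to both sorted files:
# these are the labels the game would accept for the image
comm -12 player1.txt player2.txt
# prints:
# animal
# cat
```

Requiring independent agreement is what keeps the collected labels honest: one player typing nonsense produces no match.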
"But wouldn't that take a long time?" you may ask. Well, you've got to see Luis' presentation to learn the interesting statistics; in short, I can only say it works!
You can see, and try, the games at www.espgame.org and www.peekaboom.org. I just checked the ESP Game site, and it shows they have collected more than 33 million image labels since October 2003. Wow.
Luis himself doesn't seem to have stopped there; he continues to research how to solve more interesting problems.
I noted one interesting question in the video: would it be possible, for every boring job we do, to create a fun way of doing it? To which Luis answered: I don't know; it would be great if we could figure out how to do this for every problem, but yes, this is an open problem.
An interesting open problem indeed.
So let's use up those wasted computation cycles :-)
Tuesday, November 13, 2007
Thursday, October 25, 2007
a Good Software?
Joel (the on-software guy) has been doing marketing tours for his company's FogBugz version 6, a project management software with all kinds of "thingamajiggies".
I noted two things:
First, his marketing strategy is very good: giving free product demos and presentations all over the world.
Second, FogBugz 6 is a _very-very_ good piece of software. That's how I think all software with a user interface should behave. Four thumbs up for Joel and team.
Info: You could watch a video recording of Joel presentation in Austin here: http://joelonsoftware.com/items/2007/10/24austindemo.html
If you have a slow internet connection or some crazy proxy that won't let you enjoy the video (it's 265 MB), I've found that you can wget the flash video here: http://media.fogcreek.com/Joel-Austin07.flv
Admittedly, I've learned a lot.
Update: I should have said I learned at least 3 things; the other one is the Evidence Based Scheduling system FogBugz has. I've always wondered about tracking and estimating time on my projects (and by projects I really mean all the things/stuff that require me to actually DO them).
Thursday, October 11, 2007
Back to Fitrah :-)
Please... forgive us.
Happy Ied Mubarak 1428 H.
may Allah always be in our path.
Here and after.
Arief - Rizka - Aga - Nadia - Ghifa
... and all the rest of the team...
64 (virtual) Processors on your desktop?
Taken from a great article by Ulrich Drepper on http://lwn.net:
"... Red Hat, as of 2007, expects that for future products, the “standard building blocks” for most data centers will be a computer with up to four sockets, each filled with a quad core CPU that, in the case of Intel CPUs, will be hyper-threaded. {Hyper-threading enables a single processor core to be used for two or more concurrent executions with just a little extra hardware.} This means the standard system in the data center will have up to 64 virtual processors. ..."
Monday, September 24, 2007
Suspend-Resume in OLPC Laptop
Jim Gettys, of X Window System fame, shares the story of how he and the OLPC team (CMIIW) got the laptop doing suspend and resume over hundreds of thousands of cycles.
Read about it here.
Friday, September 21, 2007
To C or To C++
This one from Linus, surely is my kind of comment :-)
"C++ is a horrible language. It's made more horrible by the fact that a lot of substandard programmers use it, to the point where it's much much easier to generate total and utter crap with it. Quite frankly, even if the choice of C were to do *nothing* but keep the C++ programmers out, that in itself would be a huge reason to use C."
Indonesia Earthquake Information
Just got this done 2 weeks ago, but only roughly. Since then I've improved it to also show historical data.
It's accessible on my website (http://arief-mulya.com/gempa.php).
All data is taken directly from the BMG EWS server. I used wireshark to help me figure out what protocols are used to fetch the data. I don't think my way of getting the data is the best one yet, and it doesn't currently handle warning cancellation messages (such as tsunami warning cancellations).
I don't have time to improve it again for now, but I plan to also add a "latest-only" link and an RSS feed there. If anyone knows a free web-to-SMS service, please let me know; I'd love to add the ability to send SMS warnings too.
Monday, August 06, 2007
Linux Power Consumption
Intel has created powertop [www.linuxpowertop.org], a power-consumption watching utility that works much like the `top` program, except it watches which processes eat most of your laptop's battery life.
In Debian unstable I could just apt-get install powertop to use it. It also gives many (useful?) suggestions on how to save power; after implementing some of them, I got to save a few watts and extend my battery life.
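For a taste of what those suggestions look like: the classic one concerns how often the kernel writes dirty pages back to disk (the echo needs root, and 1500 is the value powertop itself used to suggest, not mine):

```shell
# current interval between kernel dirty-page writebacks, in centiseconds
cat /proc/sys/vm/dirty_writeback_centisecs
# powertop's classic suggestion: write back less often, so the disk
# can stay spun down longer (run as root)
# echo 1500 > /proc/sys/vm/dirty_writeback_centisecs
```

The setting resets on reboot, so people usually put it in a boot script if they like the result.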
What's also interesting to me: I've learned that accessing Gmail from Firefox seems to contribute a lot to my laptop's power consumption. Heavy JavaScript? I don't know, but it may mean I won't be standing by on Gmail all the time from now on.
Thursday, August 02, 2007
GNOME Online Desktop Project
An interesting project.
I think it suits me quite well, considering that I spend most of my computing time nowadays in a Web Browser (for all my application needs) and a Terminal (for programming and quick-and-dirty work).
Next Step Plan
Your list of things TODO is actually a list of things you _cannot_ / _will-not_ do.
Not until you actually plan what needs to be done for each of the mumble-mumble you identify as "TODO".
Saturday, July 14, 2007
Apatar, F/OSS Visual Data Integration
About 5 years ago I set my lazy brain turning on its wheels to help me create decent parsers for processing GSM switch configuration and traffic data. Anyone who has ever dealt with that data knows the fun you can have interacting with it.
I was a fool back then (not that now is any better; I've become more foolish than ever). Not knowing how things worked, and fueled by a young man's passion, I decided to create them myself, hand-written in C, the "best programming language" ever created by mankind. :-)
I didn't just jump to pure C, though. I remember I first thought of creating them using bison and flex (UNIX techies: they are free-software implementations of yacc and lex). For this, I studied those wonderful things about language grammars, automata, recursive descent, right recursion, left recursion, the Backus-Naur Form, and many other things that I have -of course- now forgotten :-)
Apparently my learning was not enough, or I was simply too stupid to grasp the whole concept, or both. I managed to create some parsers, but they turned out to be so complex that I couldn't easily debug and adjust them every time there was an error or an update.
So I turned to C. Just C this time. I even bought Peter van der Linden's Deep C Secrets book. A really great and recommended book for learning C. I had some good laughs from it, too. (Which reminds me, where is that book now? Hello?)
From the book I got valuable lessons about finite-state machines. I used this knowledge to create my parsers in C, and succeeded. I could even understand the code at the time; although, since it has never failed me yet, I've since forgotten about it too :-)
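Those parsers are long gone, but the finite-state machine idea itself is small enough to sketch even in shell (the BEGIN/END record format here is made up, nothing like real switch output): one state variable, plus a case branch per (state, input) pair:

```shell
# toy finite-state machine: extract records between BEGIN and END markers
parse() {
  state=OUT
  while IFS= read -r line; do
    case "$state:$line" in
      OUT:BEGIN) state=IN ;;             # enter the record block
      IN:END)    state=OUT ;;            # leave it
      IN:*)      echo "record: $line" ;; # only lines inside count
      *)         : ;;                    # ignore everything else
    esac
  done
}
printf 'noise\nBEGIN\nfoo\nbar\nEND\ntrailing\n' | parse
# prints:
# record: foo
# record: bar
```

The nice property, and what made the approach debuggable, is that adding a new input format means adding case branches, not restructuring the whole parser.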
I do remember that the experience, while thrilling and enjoyable, was at the same time hair-pulling and nightmarish. Something I'm not sure I'd want to do again if the fish weren't worth it.
Nowadays I've learned that people use things called ETL (Extract, Transform, Load) tools for this kind of work, and many software vendors now have one. ETL tools can, at least in theory, read any data in any format and transform it into any format you want.
Remembering my past experiences, I'm amazed and thrilled to know there is such a tool in the F/OSS world too, called Apatar (http://www.apatar.com). Apatar even has a visual, drag-and-drop-touch-and-go-you-name-it way of creating ETLs, which they call DataMaps. Many DataMaps are already available on their forge site, http://www.apatarforge.org.
If only I had known about these tools back then. It might have saved me some sleepless nights.
Or not.
I was a fool back then (now is not any better, I've become more foolish then ever), not knowing how things worked and fueled by young-man's passsion, I decided to create them by myself, hand-written using C, the "best programming language" ever created by mankind. :-)
I didn't just jump straight to pure C, though. I remember that I first tried to build them using bison and flex (for UNIX techies: the free-software implementations of yacc and lex). For that, I studied all those wonderful things about language grammars, automata, recursive descent, right recursion, left recursion, Backus-Naur Form, and many other things that I have -of course- forgotten by now :-)
Apparently my learning wasn't enough, or I was simply too stupid to grasp the whole concept, or both. I managed to create some parsers, but they turned out to be so complex that I couldn't easily debug or adjust them every time there was an error or an update.
So I turned to C. Just C this time. I even bought Peter van der Linden's Expert C Programming: Deep C Secrets, a really great book for learning C that I warmly recommend. I had some good laughs from it, too. (Which reminds me: where is that book now? Hello?)
From the book, I learned valuable lessons about finite-state machines. I used that knowledge to write my parsers in C, and succeeded in doing so. I could even understand the code at the time, although, since it has never failed me yet, I've forgotten all about that too :-)
I do remember that the experience was thrilling and enjoyable, but at the same time hair-pulling and nightmarish. Not something I'm sure I'd want to do again unless the fish is really worth the fishing.
Nowadays, I've learned that people use things called ETL (Extract, Transform, Load) tools for this kind of work, and plenty of software vendors offer them now. ETL tools can, at least in theory, read data in any format and transform it into any format you want.
Remembering my past experience, I'm amazed and thrilled to find that there is also such a tool in the F/OSS world, called Apatar (http://www.apatar.com). Apatar even has a visual, drag-and-drop, touch-and-go-you-name-it way to create ETL jobs, which they call DataMaps. Many DataMaps are already available on their forge site, http://www.apatarforge.org.
If only I had known about these tools back then. It might have saved me some sleepless nights.
Or not.
Wednesday, July 04, 2007
World's 3 Richest Men
I just got this from slashdot: http://seattlepi.nwsource.com/business/322239_richest04.html?source=mypi.
So the three richest men in the world are now the ones having fun in these fields:
1. Telecommunication Network
2. Software Engineering
3. Investment Business
As it happens, those are the fields of my current interest.
Hmm. Not a bad path after all.
I just need to make it work for me :-)
Sunday, June 24, 2007
Fast notes
It's been a long time; life's great, but I've got many things to work on and so little time.
Classic, huh?
Oh well, this is just a quick stop-by anyway.
I need to note down these files/directories that matter when handling an "unstable" Debian:
- /var/lib/apt/lists
- /var/lib/dpkg/status
- /var/lib/dpkg/available
I keep forgetting about them.
Wednesday, May 02, 2007
Drawing a slanted line on a monitor

An easy task, right?
Like many other things in the world of computers, this is something that looks easy but is in fact quite hard.
Fortunately, somebody has already thought it through, and the resulting formula is called Bresenham's algorithm. Hint: go to Wikipedia for the details.
If you look at the picture beside this, you can see that at the middle position we actually need to colour 3 pixels, instead of 2, to get the desired slope.
And that's just raster graphics. For vector graphics, well, I can't even imagine.
Sob.
Monday, April 30, 2007
RI-1 Uses 3G Video Conference

At the inauguration of the pilot project for low-cost apartment construction (Rumah Susun Sederhana), President Soesilo Bambang Yudhoyono, who was at Pulo Gebang, used Telkomsel's 3G video conference facility for a Q&A session with the people in charge of the project at four different locations: Cakung, Klender, Cipayung, and Cawang.
By the way, this blog and my other posts related to telecommunications in Indonesia will also be available (perhaps in greater number) at http://arief.telkom.us, which exists thanks to the extraordinary Mas Koen.
Wednesday, April 25, 2007
UNIX is Simple
... that's what Linus said in his "Just for Fun" book.
Come to think of it, I agree with him.
UNIX. Simple. Small. Beautiful.
But don't let it fool you:
Simplicity is a showcase of elegance in solving problems.
And you can only be elegant through experience and wisdom.
Through realising and accepting that nobody's perfect, and that you _will_ make mistakes.
Hmm, it reminds me of a saying:
"Nobody's perfect. I'm nobody. I'm perfect."
:-)