May 30, 2020

Hand Carry Data Collecting Through Questionnaire and Quiz Alike Using Mini-computer Raspberry Pi


  • This paper was presented at The 4th International Mobile Learning Festival (IMLF) in Hong Kong, SAR China, on 10th June 2017.
  • I do not remember transferring the copyright. If I am correct, the copyright remains with both me, "Fajar Purnama", the main author, and the IMLF 2017 proceedings, so I have the authority to repost this anywhere, and I hereby declare it licensed under a customized CC-BY-SA: you are also allowed to sell my content, but on the condition that you mention that the free and open version is available here. In summary, the mention must contain the keywords "free" and "open" and the location, such as a link to this content.
  • However, please also support the IMLF proceedings by visiting their website, where you can download not only mine but all the proceedings for free by clicking the download button and registering your email.
  • The presentation is available on SlideShare.
  • Abstract

    Conventionally, data collection through surveys or quizzes is done by distributing paper questionnaires or by interviewing people directly. With the invention of the Internet, these methods have moved online. For example, in a university with highly developed information and communication technology (ICT), authorized personnel send emails asking students to complete an online questionnaire hosted on a website. However, in many developing countries, such as those in Southeast Asia, people are already familiar with computing devices such as gadgets, laptops, and netbooks, but do not have a reliable Internet connection. This work therefore proposes a method that exploits this situation to improve the convenience of the survey process for both surveyors and participants. Since most people own gadgets, our method provides a portable hotspot device to which they can connect and access a local survey questionnaire website. This is possible thanks to credit-card-sized computers such as the Raspberry Pi. Like any other computer, it can run an operating system (OS) and be installed with a hotspot module and a web server, which is enough to conduct surveys or quizzes over a wireless local area network (WLAN), except that it is hand-carry sized and easier to carry than a laptop. In this work the method was implemented and put through a few trials. This research concerns mobility on the surveyors' or teachers' side more than mobile learning on the students' side.


    There are many forms of data collection. Questionnaires, for example, produce results used for statistical analysis, such as finding students' and teachers' perspectives on e-learning, as in the research of our peers Paturusi (2015) and Monmonthe (2016), where they were needed to determine e-readiness at the respective universities studied. In classrooms, quizzes are more often used to assess which parts of a subject the students clearly understood and which they did not. Quizzes have other benefits as well, such as stimulating the students' learning process, which can guide them through the subject and help them perform better in exams, as discussed in McDaniel (2012), where experiments were performed on different types of quizzing, such as repeated quizzing with items identical to, or only related to, the exam items, and providing feedback after quizzes. Both questionnaires and quizzes serve the purpose of information gathering.

    These, however, are not what is discussed here; the focus is the method or process of conducting the data collection or survey. The methods of our peers were still quite conventional, distributing paper questionnaires and collecting them back, while others used online methods that rely on computers and an Internet connection, currently one of the easiest ways. However, in many developing countries, such as in Southeast Asia, the Internet connection is not well established (The World Bank Group, 2016), meaning that online surveys are not the answer; in Indonesia, for example (Kusumo, 2012), this forced our peers to use the conventional method. Yet most people there are familiar with and own computing devices such as gadgets, Android phones, and iPhones (The World Bank Group, 2016), and this research tries to utilize that situation, aiming to be more convenient than the conventional method. Since computers are used, the method also gains the main advantage of online surveys: automated data collection (Wright, 2005).

    This topic concerns mobility on the surveyors' or teachers' side more than the typical mobile learning on the students' side. The proposed method uses a portable server to which the users' computing devices can connect to take the survey. The data obtained are stored on that mini server and later extracted by the surveyors with ease; it is also possible to program preprocessing on the mini server, which makes things even easier. This idea is easy to realize since the invention of the credit-card-sized computer Raspberry Pi (there are other brands as well, but this one is used here). All that is needed is to prepare the Raspberry Pi by installing an OS, a hotspot module through which the users connect over WLAN, and a local website for the survey material itself. After this idea was realized, a small trial was conducted with a few users. More importantly, the advantages of this method are shown and discussed, as well as its limitations in terms of resource consumption.

    Related Work

    There are other studies that faced a situation similar to this one, where people have their own computing devices but insufficient infrastructure to connect to the Internet. Most of these studies propose making things portable as the answer. Here are some related works:

    • The work of Royyana (2010) proposed making an online quiz portable so that students can take it home on their computing devices and attempt it offline; the online system is synchronized later, once the students come across a reliable Internet connection.
    • Kuziek (2017) proposed a method using a Raspberry Pi to conduct electroencephalography (EEG) experiments outside the laboratory, since it is not easy to capture naturally occurring behavior when EEG experiments are conducted inside a lab.
    • An interesting work by Wittenberg (2015) proposed the use of keys, flash drives that contain a complete computing environment running the specialized software needed for Computer Science courses. Since not many students are versed in programming or in setting up their own environments, flash drives pre-installed with an OS and software were distributed so that students could boot them on any computer.
    • The work of O'Connor (2011) is similar to this one: they used laptops to collect data for a home visitation program that normally used paper questionnaires entered into a database afterwards. Using a laptop avoids that trouble and other costs such as printing, and the result showed reduced cost in money and time. The difference is that this work uses a smaller computer and targets mass surveys.

    Materials and Methods


    The device used is a hand-carry minicomputer that functions as a portable server. Table 1 gives the specification of the minicomputer, and Table 2 lists the modules needed to execute the method in the next section. Nowadays the price of a Raspberry Pi ranges from $30 to $50. If the items needed to configure the Raspberry Pi are not already owned, they can be purchased: a high-definition multimedia interface (HDMI) compatible display starting from $20, a keyboard from $5, a mouse from $1, and a power bank from $10.

    Table 1. Specification of the hand carry computer Raspberry Pi 2 Model B.
    A 900MHz quad-core ARM Cortex-A7 CPU
    1 Giga Byte (GB) Random Access Memory (RAM)
    4 Universal Serial Bus (USB) ports
    40 General Purpose Input Output (GPIO) pins
    Ethernet Port
    Camera Serial Interface (CSI)
    Display Serial Interface (DSI)
    Micro Secure Digital (SD) card slot
    VideoCore IV 3D graphics core

    Table 2. A list of modules necessary for the device. The Items column is general, while the Materials column names the specific software used here to perform each item's function; the materials are not limited to these, as long as they can perform the functions.
    Items Materials Details
    Computer: Raspberry Pi 2 The minicomputer.
    Operating System: Raspbian Short for Raspberry Debian, a Linux-based OS for the Raspberry Pi itself.
    Hotspot Module: Hostapd An application that runs the wireless interface as a hotspot for users to connect to.
    DHCP Server: Udhcpd Each connected device needs to be assigned an identity on the WLAN, an Internet Protocol (IP) address.
    Webserver: Apache2 The questionnaire is web based, so connected users access it with their browsers; this hosts the local website.
    Database Server: MySQL Uses Structured Query Language (SQL) to store the data input by the users.
    Landing Page using DNS server: Dnsmasq and Iptables Normally users would have to be told the address of the questionnaire website, but with this their browsers are automatically redirected to it.
    Survey Software: Limesurvey A content management system (CMS) used for online surveys.


    This work is designed to give convenience and mobility to surveyors and teachers alike in their tasks, which for now are limited to collecting responses from others, for example conducting quizzes to assess students' knowledge or surveying crowds to learn their perspectives. With limited Internet connectivity the modern online survey is unusable, but with widespread ownership of computing devices an easier way than the conventional paper questionnaire becomes available. That method is a hand-carry computer functioning as a portable server that gathers data input from the participants' own devices, which connect to it as clients, as illustrated in Figure 1. When conducting surveys, there is no longer a need to hand over paper questionnaires; simply ask people to connect to the device and answer the questions from their gadgets. It can be applied by surveyors gathering data at institutions, teachers giving quizzes to their students, surveyors going from home to home, or even random people in public crowds, whether for commercial or personal use. Unlike the paper-based approach, processing can be done on the device, which eliminates the need to manually input and process the survey data afterwards, so results can be obtained instantly and cumulatively.
    Figure 1. Illustration of using a hand-carry computer to gather information that other users input from their own computing devices.

    As described in the previous subsection, the hand-carry device used is a Raspberry Pi. The Linux-based Raspbian OS is flashed onto this computer. The required modules can be downloaded and installed from the Internet, to which the Raspberry Pi can connect via its wired or wireless interface. The first modules needed connect users to the Raspberry Pi wirelessly, based on IEEE 802.11: Hostapd runs the wireless interface as a hotspot, and Udhcpd gives IP addresses to the clients attempting to connect. The second group of modules hosts the questionnaires, quizzes, or the like, which in this work are web based: Apache2 as the web server serving the electronic questionnaire, and MySQL as the database server storing the data input by clients. In this work the CMS Limesurvey is used to manage the local questionnaires; a sample screenshot is shown in Figure 2. The third group of modules is not essential but eases the connection process for clients: the DNS server Dnsmasq resolves all domain names to the local survey website, and Iptables redirects traffic if the server is connected to the Internet. Together they function as a landing page that automatically directs clients to the questionnaire's location when they open their browsers; without them, clients have to be told the address beforehand and find the location manually. With all of this done, the Raspberry Pi functions as a hand-carry server.
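    The hotspot and landing-page setup described above can be sketched with minimal configuration fragments like the following. They are illustrative only: the interface name wlan0, the SSID, and the address range are assumptions that depend on the actual device and network.

```shell
# /etc/hostapd/hostapd.conf -- run the wireless interface as an access point
#   interface=wlan0
#   ssid=SurveyPi
#   hw_mode=g
#   channel=6

# /etc/udhcpd.conf -- hand out IP addresses to connecting clients
#   start 192.168.42.10
#   end 192.168.42.250
#   interface wlan0
#   opt subnet 255.255.255.0
#   opt router 192.168.42.1

# /etc/dnsmasq.conf -- resolve every domain name to the local survey site,
# so any address typed into a browser lands on the questionnaire
#   address=/#/192.168.42.1

# Give the wireless interface a static address and start the services
sudo ifconfig wlan0 192.168.42.1 netmask 255.255.255.0
sudo service hostapd start
sudo service udhcpd start
sudo service dnsmasq start
```

    With these in place, a client that associates with the hotspot receives an address from Udhcpd and is steered by Dnsmasq to the local Apache2 site hosting Limesurvey.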
    Figure 2. Screenshot of attempting a survey using this method, where it can be seen that the client connects through the hotspot and receives an IP address, with the survey link residing on the local server. This was attempted on a laptop, but it is not much different on mobile.


    Small simulations or trials were carried out with 1 surveyor surveying 30 people simultaneously. The surveyor was one of our lab members, Elphas Lisalitsa, and it was fortunate that he had never heard of the Raspberry Pi when we approached him, so his feedback on using this method could be more objective. It was required that the surveyor knew how to carry out this method, including using the Limesurvey CMS, so before the trial he was trained, which fortunately took only one session lasting a few minutes. This was to keep the comparison fair, since he was already computer literate, skilled in creating questionnaires with document editors and printing them. As he was already versed in Microsoft Word, LibreOffice Writer, and similar software, it was only fair that he should also be versed in our method. A person who does not know how to use LibreOffice Writer would take a long time to make the questionnaire, and the same story applies to not knowing this method.

    The first experiment was the conventional one, using paper: writing a 29-item questionnaire, printing it, handing it to the participants, collecting it back, and finally inputting the responses into the database. The second experiment used our method: writing the 29-item web-based questionnaire, starting the device, and asking the participants to connect and answer the questions. Due to current limitations a field survey could not be conducted, so a simulation was run with 29 virtual users and 1 real user attempting the survey on the Raspberry Pi. The same applies to the paper-based experiment, where distributing and collecting the papers were simulated with only a single participant answering the questions. In the end the surveyor was asked to compare the convenience of both methods. The questionnaire items were based on a MOOC readiness survey in high schools and a national university in Mongolia, containing 18 five-point Likert scale questions, 5 yes-or-no questions, 4 multiple-choice questions, and 2 fill-in questions, totalling 633 words and 3628 characters. That survey was led by our peer Otgontsetseg Sukhbaatar.

    For further simulation, stress testing was conducted to see whether the device could handle up to one hundred users. Since, as stated before, the authors did not have a real testing ground, a simulation was carried out using Funkload, a web stress-testing application (Delbosc, 2017), from another, more powerful computer to simulate a hundred virtual users accessing and conducting the survey at the same time. The application records the activities in the browser, from accessing the survey and answering questions to viewing the current results, and later replays them in benchmarking mode with more virtual users. CPU usage, memory usage, and power delivery were also measured, but most importantly the response time.
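    A FunkLoad run of the kind described might look like the command sketch below; the test name "Survey" and the cycle counts are assumptions for illustration, while fl-record, fl-run-test, fl-run-bench, and fl-build-report are FunkLoad's standard tools.

```shell
# Record the browser activity (access survey, answer questions, submit,
# view results) via FunkLoad's proxy recorder; this generates test_Survey.py
fl-record Survey

# Replay the recorded scenario once with a single user to verify it
fl-run-test test_Survey.py

# Benchmark mode: ramp the number of concurrent virtual users cycle by cycle
fl-run-bench -c 1:10:30:50:100 test_Survey.py Survey.test_survey

# Build an HTML report (response times per cycle) from the results file
fl-build-report --html survey-bench.xml
```

    Each cycle in the benchmark replays the recorded requests with the given number of concurrent virtual users, which is how response times for 1 up to 100 users were obtained.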


    Data Collection Process
    Figure 3. Time consumption of the survey process from preparation and responding to post-survey. For the paper-based method, preparation consists of question typing and question printing, responding consists of question distribution, question answering, and response collection, and post-survey consists of response insertion. For the hand-carry server method, preparation consists of question typing with web delays and responding consists of server connection and question answering with web delays; the advantage of this method is that no post-survey step is needed, since the responses are inserted automatically.

    Figure 3 shows the time consumption of both methods, with little difference in the preliminary and data collection stages. In the preliminary stage, the conventional method starts by opening LibreOffice Writer and writing the 29 questions, which took 33 minutes. Printing the 3-page questionnaire for 30 people on an OKI C332 fast printer took as quick as a second per page, roughly 1 minute and 30 seconds in total, assuming automatic stapling. Older printers may take much longer. Also, more paper means more weight, while the Raspberry Pi weighs only 45 g.
    Figure 4. Time consumption captured while creating and attempting the survey on the Raspberry Pi. Idle time represents the time taken for typing, choosing, etc. (manual labor), while the rest is web delay, such as the time to load a page or submit a form.

    How long making the questionnaire takes on the Raspberry Pi depends on the application used, in this case the Limesurvey CMS. The time consumption can be divided into typing the questions and delays from the web system, with detailed data shown in Figure 4. Using the developer tools available in all browsers, the questionnaire creation process can be monitored in detail. In summary, web delays such as loading and scripting took 1 minute and 28 seconds, while typing the questions took 34 minutes and 27 seconds. For the paper-based method the issue is producing hard copies, which adds printing time, while for this method the delays depend on the hardware and software capabilities of the server and/or of the client if working remotely. Greater capabilities lessen web delays such as page loading, and conversely, lower capabilities produce more lag.

    During data collection, manual labor is the concern for the paper-based method, namely distributing the questionnaires and collecting them back, while the Raspberry Pi method depends on its computing capability, where performance degrades as the number of users grows (more details in the next subsection), and also on the capabilities of the clients' devices. For the paper-based method, distributing the questionnaires took 1 minute 15 seconds and collecting them back took 1 minute 10 seconds. For this method, connecting took 1 minute and 2 seconds and the web delay was 11 seconds, tested with one real user while 29 virtual users were logged in (this result relates closely to Figure 6). As for answering the questions themselves, there was little difference: 2 minutes and 54 seconds for the paper-based method versus 2 minutes and 59 seconds for this method.

    Finally, the post-data-collection stage is where the advantage of this work's method shows. The conventional method requires an extra step: inputting the data into the database. Figure 3 assumes the fastest semi-automatic approach, using a scanner and optical character recognition (OCR) to read the answers and put them into the database automatically, as in English tests or national examinations; this took 7 minutes and 30 seconds for 90 pages of responses (3 pages multiplied by 30 people), with our Epson ES-H300 scanner handling 5 seconds per page. Most surveyors, though, do not have this technology and type the responses in one by one, which takes far longer; usually two people are assigned the same task so their entries can be cross-checked against each other to mitigate human error. Note that this does not yet include generating graphs for analysis.
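    The scanning figure above follows directly from the page count; a short sketch of the arithmetic, assuming the stated 5 seconds per page:

```python
pages_per_response = 3
respondents = 30
seconds_per_page = 5  # Epson ES-H300 scanning speed stated above

total_pages = pages_per_response * respondents   # 3 x 30 = 90 pages
scan_seconds = total_pages * seconds_per_page    # 90 x 5 = 450 seconds

minutes, seconds = divmod(scan_seconds, 60)
print(f"{total_pages} pages take {minutes} min {seconds} s")  # 90 pages take 7 min 30 s
```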
    Figure 5. Data in the form of bar graphs and pie charts, shown the instant the hand-carry server received the responses. Only 6 of the 29 item results are shown here, since showing all would be too much.

    The hand-carry server method thus surpasses those approaches (whether manual or machine assisted with a scanner), since it can input responses and generate analysis graphs the instant participants answer the questionnaire. This also makes clickers possible, like the polls on television shows. The statistics page in Figure 5 has to be refreshed manually every time to show the latest results, though this depends on the services the CMS provides; a little Asynchronous JavaScript and XML (AJAX), or JSON-based updates, could make it more real-time, with the page updating automatically. In short, this stage can be a heavy burden for the surveyor with the paper-based method, while with this method there is no need to go through it at all, saving a great deal of labor and time. In the end, the total time consumption in Figure 3 is shorter for this method because it skips the post-data-collection stage.

    Device’s Performance Measurement

    As noted in the previous section, the authors were unable to conduct larger field testing, so a simulation was run instead using Funkload with up to a hundred virtual users conducting the survey. According to Nah (2004), a tolerable waiting time for information retrieval is approximately 2 seconds; Bailey (2001) considers around 5 seconds still acceptable and 10 seconds the maximum. For this work, a 10-second response time was taken as the upper limit.
    Figure 6. Response time of the simulated survey process from the participants' side, ranging up to a hundred virtual users in the left images and up to a thousand virtual users in the right images; top: accessing the survey page, middle: conducting the survey from answering questions to submitting answers, bottom: viewing survey results.

    Figure 6 shows the response time as 1 to 100 virtual users attempted the survey. This can be called the worst-case scenario, since all users access the survey at exactly the same time, meaning every question multiplied by up to 100 users was loaded, and every answer multiplied by up to 100 users was submitted, simultaneously. It is the worst case because simultaneous loading and submission almost never happen; in a real scenario the timing is randomly distributed, so the load is always lighter. Even so, the data obtained were quite unexpected, showing that 100 virtual users simultaneously loading and submitting 30 questions (an extra fake question was added to round the number) of the questionnaire described in the previous section was too much to handle. Therefore additional experiments with fewer questionnaire items, 5, 10, and 20, were added.

    For the real case of 30 items, if a guaranteed response time below 10 seconds is sought, then 10 users at a time is the maximum; if an average of 10 seconds is acceptable, it can handle up to 30 users (matching Figure 4 quite well). If longer waits are tolerable, it can take up to 85 users before failure occurs; the service broke after 90 virtual users, requiring a restart of the web and database servers. Fewer questionnaire items allow faster response times: for 20, 10, and 5 items the 10-second maximum occurred at 15, 25, and 30 virtual users respectively, while the 10-second average occurred at 45, 70, and 100 virtual users. Why does the number of items relate to response time? Because the user's browser has to load the items when attempting the survey; more specifically, the client requests and the web server transmits, and the more items there are, the more transmission takes place. After the attempt, the user also has to send the responses, and more items mean more responses to send. Again, Figure 6 shows the worst case, where all users request all items and return all responses at the same time, which is almost unreal; more user capacity may actually be available, but taking this data as the limit gives a reliable, guaranteed judgment.
    Figure 7. CPU and memory usage during survey creation (top) and during a survey attempt with an additional 29 virtual users (bottom).

    To obtain CPU and memory usage, an application called Vmstat was run every second, printing the current CPU and memory usage; the usage was calculated as the free CPU and memory subtracted from the totals available. Figure 7 shows that during survey creation the CPU usage was below 40% and memory usage below 500 MB; lower resource use is expected, since only one user is creating the survey. During the survey attempt, however, CPU usage was mostly above 80% and memory usage mostly above 600 MB, because 30 users were attempting a 30-item questionnaire at the same time. The explanation is much the same as for response time: more computing resources are needed to allow more simultaneous users and more questionnaire items.
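    The Vmstat figures were derived as just described: sample every second and subtract the free or idle amounts from the totals. A minimal sketch of that calculation on a single sample line follows; the numbers in the sample are illustrative, not measured values.

```python
# One illustrative line of `vmstat 1` output (columns in default order):
# r b swpd free buff cache si so bi bo in cs us sy id wa st
sample = "2 0 0 312456 41872 188320 0 0 5 12 410 620 55 25 18 2 0"
fields = sample.split()

idle_pct = int(fields[14])        # 15th column is the CPU idle percentage
cpu_usage = 100 - idle_pct        # usage = total minus idle

total_mem_kb = 1024 * 1024        # Raspberry Pi 2 has 1 GB of RAM
free_kb = int(fields[3])          # 4th column is free memory in KB
mem_usage_mb = (total_mem_kb - free_kb) / 1024

print(f"CPU {cpu_usage}% used, memory {mem_usage_mb:.0f} MB used")
```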

    The energy consumption was measured by how much of the power bank was consumed. The power bank has a capacity of 20000 milliampere-hours (mAh). After going through the whole process in Figure 3, the percentage shown on the power bank's monitor dropped from 100% to 97%, meaning only 3% was used; the calculation in Equation 1 gives 0.6 Ah in 39 minutes. In an hour it would use about 0.92 Ah, which matches the experiment reported in "Raspberry Pi FAQs" (2016) quite well. At 5 volts (V), the power delivery is 0.92 Ah multiplied by 5 V, about 4.6 watt-hours (Wh). In the end, power delivery is not a major concern.
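    Equation 1's figures can be checked with a few lines of arithmetic, using the values given above (a 20000 mAh power bank, a 3% drop over 39 minutes, at 5 V):

```python
capacity_mah = 20000
percent_used = 3        # monitor dropped from 100% to 97%
minutes = 39

used_ah = capacity_mah * percent_used / 100 / 1000   # 0.6 Ah in 39 minutes
per_hour_ah = used_ah * 60 / minutes                 # ~0.92 Ah per hour

voltage = 5.0
watt_hours = per_hour_ah * voltage                   # ~4.6 Wh

print(f"{used_ah:.2f} Ah used, {per_hour_ah:.2f} Ah/h, {watt_hours:.1f} Wh")
```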

    Conclusion and Future Work

    This work showed that the hand-carry server method is more convenient than the paper-based method. Comparing the time consumption of the two methods, this work's method was faster, since less manual labor is involved. Its advantage is most visible in the post-data-collection stage, where responses are inserted into the database automatically and displayed as statistics instantly, in real time. Although it provides great convenience, there are limitations due to the resources available on the hand-carry server. With 5, 10, 20, and 30 questions in the survey, the response time is guaranteed not to exceed 10 seconds as long as the number of users does not exceed 35, 25, 15, and 10 respectively. If more than that is still tolerable, the simulation showed that an average response time of 10 seconds occurred at 100, 70, 45, and 30 virtual users for 5, 10, 20, and 30 items respectively. Likewise for CPU and memory usage, which were mostly consumed when the number of users exceeded 30, each loading a 30-item questionnaire. For a class with an average number of people, the device can handle the load.

    This work is only an initial application that introduces the idea, with much yet to be implemented. There is room for improvement in its data structure and performance. Another issue yet to be discussed is privacy and reliability, for example susceptibility to data loss and failures. Synchronization may also be discussed, from the hand-carry device to a main server, and between hand-carry devices when more than one is used for a survey, such as how to combine the data together. In the future we will also try other, more popular hand-carry devices such as mobile phones, examining whether they can function as portable servers like the one in this work, and compare them with it.


    Part of this work was supported by JSPS KAKENHI Grant-in-Aid for Scientific Research 25280124 and 15H02795. The authors would like to thank Elphas Lisalitsa for being willing to serve as the surveyor in the trial, in which he was burdened with conducting two kinds of survey, the paper-based method and the hand-carry method, from creating the questions and collecting the data to inputting the data. The authors would also like to thank Otgontsetseg Sukhbaatar for providing her questionnaire items and sharing her experience of running paper-based surveys in some high schools in Mongolia.


    • Bailey, B. (2001). Response Times. (2017, March 03). Retrieved from
    • Delbosc, B. Funkload documentation contents. (2017, March 03). Retrieved from
    • Ijtihadie, R. M., Chisaki, Y., Usagawa, T., Cahyo, H. B., & Affandi, A. (2010). Offline web application and quiz synchronization for e-learning activity for mobile browser. TENCON 2010 - 2010 IEEE Region 10 Conference. doi:10.1109/tencon.2010.5685899
    • Kusumo, N. S. A. M., Kurniawan, F. B., & Putri, N. I. (2012). Learning obstacles faced by Indonesian students. In The Eighth International Conference on eLearning for Knowledge-Based Society. Bangkok, Thailand. Retrieved from
    • Kuziek, J. W. P., Shienh, A., & Mathewson, K. E. (2017). Transitioning EEG experiments away from the laboratory using a raspberry pi 2. Journal of Neuroscience Methods, 277, 75–82. doi:10.1016/j.jneumeth.2016.11.013
    • McDaniel, M. A., Wildman, K. M., & Anderson, J. L. (2012). Using quizzes to enhance summative-assessment performance in a web-based class: An experimental study. Journal of Applied Research in Memory and Cognition, 1(1), 18–26. doi:10.1016/j.jarmac.2011.10.001
    • Nah, F. F. (2004). A study on tolerable waiting time: how long are Web users willing to wait? Behaviour & Information Technology, 23(3), 153-163. doi:10.1080/01449290410001669914
    • O’Connor, C., Laszewski, A., Hammel, J., & Durkin, M. S. (2011). Using portable computers in home visits: Effects on programs, data quality, home visitors and caregivers. Children and Youth Services Review, 33(7), 1318–1324. doi:10.1016/j.childyouth.2011.03.006
    • Paturusi, S., Chisaki, Y., & Usagawa, T. (2015). Assessing lecturers and student’s readiness for e-learning: A preliminary study at national university in north Sulawesi Indonesia. GSTF Journal on Education (JEd), 2(2), . Retrieved from
    • Raspberry Pi FAQs - Frequently Asked Questions. (n.d.). (2017, February 26). Retrieved from
    • T, M, M., Win, T., Oo, M, Z., & Usagawa, T. (2016). Students’ e-readiness for e-learning at two major technological universities in Myanmar. In Seventh International Conference on Science and Engineering (pp. 299–303). Yangon, Myanmar.
    • The World Bank Group. (2016). Internet users (per 100 people). (2017, March 06). Retrieved from
    • The World Bank Group. (2016). Mobile cellular subscriptions (per 100 people). (2017, March 06). Retrieved from
    • Wittenberg, L. (2015). MC-Live. Proceedings of the 46th ACM Technical Symposium on Computer Science Education - SIGCSE ’15 (pp. 421-423). Kansas City, Missouri, USA. doi:10.1145/2676723.2677216
    • Wright, K. (2005). Researching Internet-Based Populations: Advantages and Disadvantages of Online Survey Research, Online Questionnaire Authoring Software Packages, and Web Survey Services. Journal of Computer-Mediated Communication, 10(3), 00-00. doi:10.1111/j.1083-6101.2005.tb00259.x


    May 29, 2020

    Rsync and Rdiff Implementation on Moodle's Backup and Restore Feature for Course Synchronization over The Network



    E-learning has been widely implemented in education systems. Most higher institutions have adopted Learning Management Systems (LMSs) to manage their online courses, with Moodle one of the most favored LMSs. On the other hand, creating a well-designed and well-written course remains difficult for teachers, which is why the community encourages them to share their courses for others to reuse. The authors or teachers then continuously revise their courses, forcing subscribers to re-download the whole course again, which soon leads to exhaustive network usage. To cope with this issue, a synchronization model for course backup files is proposed that retrieves only the differential updates. This paper proposes synchronization on top of the existing backup and restore features: file synchronization is performed between course backup files based on the rsync algorithm. Experiments were conducted on a virtual machine, a local network, and a public network. The results showed lower network traffic compared to the conventional sharing method, just like our previous synchronization method. Unlike the previous one, however, this method has two additional advantages: the flexibility to control the synchronization content and compatibility with all versions of Moodle.


    It is very common today to deliver education using electronic devices, referred to as e-learning. Advanced application systems that manage e-learning, known as LMSs, are widely used in higher education. Modular Object-Oriented Dynamic Learning Environment (Moodle) is one of the most popular and preferred LMSs for delivering courses online. Many higher institutions in the country of origin of one of the authors have implemented Moodle as their LMS [1], and the problems faced by the country's students have been discussed there. The authors of [2] investigated the readiness for e-learning implementation at Sam Ratulangi University, and the implementation of mobile learning on a GPRS network was assessed in [3]. With so much research on e-learning under way, more universities are likely to implement e-learning soon. No doubt the students are fortunate to be given more flexibility: with just a computer device and an Internet connection they can attempt these online courses without being limited by place and time. It is also very flexible on the teacher's side, as they can prepare their courses beforehand and give feedback to students in their leisure time.

    However, designing and writing good content is not easy. It takes experience and time to make a well designed and well written course, and some specialized content may only be correctly written by professors. For this reason Moodle encourages course sharing, as stated in [4], and there are many other sites that provide backups of courses deployable on Moodle. As time passed, another problem was encountered: constant revision inevitably occurs when perfecting a course. In addition, with today's multimedia technologies course creators often consider adding videos to their courses, which makes very large course backups common in terms of file size. The problem becomes more serious given that the survey in [1] of 10 different universities in Indonesia identified the Internet connection as one of the major obstacles to implementing e-learning.

    To overcome the problems of constant revision of course contents and limited Internet connections, the work in [5] proposed course content synchronization. With this method there is no need to re-download the whole course whenever it is revised; only the revised part is retrieved. The application was created for Moodle version 1.9, so another one compatible with later versions of Moodle had to be developed as the follow-up work in [6]. Those previous methods convert the course's database and directories into blocks and calculate the difference remotely between the outdated and the latest course. In other words, the previous applications also handle the export and import of courses, which leads to an issue: a new application needs to be created every time Moodle's structure changes.

    Moodle already has a course backup and restore feature, so it is better to let Moodle handle that part and focus only on the synchronization. This leads to an application compatible with all versions of Moodle, and the existing feature provides more flexibility over which contents are synchronized. This paper therefore proposes a file synchronization between course backup archives based on the rsync algorithm, which can calculate the difference between files remotely. Figure 1 shows the general framework of the proposed method, where only a reference of the outdated backup archive needs to be sent and is used to create a patch. Thus the objective of this research is to develop a course synchronization application that is compatible with all versions of Moodle.
    Figure 1. Course Synchronization Mechanism

    Related Work

    Course Sharing

    The introduction of the term massive open online course (MOOC) was the starting point at which many online courses became open via the web and allowed unlimited participants. In Moodle's case it was the Teaching with Moodle MOOC [4] on Moodle HQ. Thousands of educators from around the globe have taken this MOOC and been introduced to Moodle both as users and as course creators, and it still runs periodically today. The participants are encouraged to share their courses on [7]. On that website visitors may try online courses or download them in the .mbz format, which is the output of Moodle's course backup and restore feature; and that is not the only website offering online course sharing.

    Course Synchronization

    When the authors of [5] wanted to implement distributed LMSs for higher institutions in Indonesia, using their proposed method to distribute courses was not entirely possible due to the band-limited network connections, i.e. the low capacity of the Internet connection. Given education curricula, developing online courses takes continuous and countless revisions. This forces the courses to be redistributed again and again, which heavily burdens the network capacity.

    The general framework of the previous synchronization method consists, on both the master and slave LMS sides, of the Moodle tables and a synchronization table, which is a conversion of the Moodle tables into blocks containing sets of ID, hash, and version information. The synchronization occurs between these two synchronization tables. First, version matching takes place; if the slave side is outdated, block matching follows. If new information exists on the master LMS, that information is added to the slave LMS and the instruction is marked as "append". If information on the slave LMS does not exist on the master LMS, it is deleted, so the instruction is marked as "delete". Finally, if information exists on both sides but maps differently, the instruction is marked as "update". Overall the synchronization has these three main steps. Besides the database, this applies to the course's directory as well. With that algorithm a standalone application was written in PHP, compatible with Moodle version 1.9. The experiment was conducted between Institut Teknologi Sepuluh Nopember (ITS) Surabaya, Indonesia, and Kumamoto University, Kyushu, Japan, and showed low network traffic usage.
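    The append/delete/update matching described above can be sketched as follows. This is an illustrative toy, not the original PHP application: representing each synchronization table as a dict from block ID to content hash is a simplification assumed here.

```python
# Toy sketch of the block-matching step of the previous synchronization
# method: compare the master's and slave's synchronization tables and emit
# "append", "update", or "delete" instructions for the slave side.

def block_instructions(master, slave):
    """master, slave: dicts mapping block ID -> content hash."""
    instructions = []
    for block_id, digest in master.items():
        if block_id not in slave:
            # new on the master, missing on the slave
            instructions.append(("append", block_id))
        elif slave[block_id] != digest:
            # exists on both sides but with different content
            instructions.append(("update", block_id))
    for block_id in slave:
        if block_id not in master:
            # exists on the slave but was removed on the master
            instructions.append(("delete", block_id))
    return instructions
```

    For example, `block_instructions({"a": "h1", "b": "h2"}, {"b": "h9", "c": "h3"})` yields an append for "a", an update for "b", and a delete for "c".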

    File Synchronization

    The courses are shared as a backup archive in .mbz format, and our method applies remote file synchronization to the transmission process by utilizing the rsync algorithm. A common file patching system needs the two files, i.e. the unrevised file and the revised file, on the same system in order to create a patch for the previous version; uniquely, rsync can perform this remotely. Suppose that there are two LMSs, one on the master side and the other on the slave side. The master side has the latest backup file α while the slave side has the outdated backup file β. Based on [8] it is possible to update β to the latest revision α with the following steps: (1) the slave side splits β into a series of non-overlapping fixed-size blocks, where the last block may be the same size or smaller; (2) two checksums, a weak "rolling" 32-bit checksum and a strong 128-bit MD4 checksum, are calculated for every block in β; (3) the checksums are sent to the master side; (4) the master side searches α to find all blocks at any offset that have the same weak and strong checksums as one of the blocks of β; and (5) the master side sends a sequence of instructions to the slave side to construct a copy of α, which can either be references to blocks of β or data retrieved from α that does not match any block of β.
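    The five steps above can be sketched in pure Python. This is an illustrative toy, not librsync: `zlib.adler32` stands in for the weak rolling 32-bit checksum, MD5 stands in for the 128-bit MD4, and for brevity the weak checksum is recomputed at every offset instead of being rolled.

```python
import hashlib
import zlib

BLOCK = 4  # toy block size; real rsync uses much larger blocks

def signature(outdated):
    """Steps (1)-(2): per-block (weak, strong) checksums of the outdated file."""
    sig = {}
    for i in range(0, len(outdated), BLOCK):
        block = outdated[i:i + BLOCK]          # last block may be shorter
        weak = zlib.adler32(block)
        strong = hashlib.md5(block).hexdigest()
        sig.setdefault(weak, {})[strong] = i // BLOCK
    return sig

def delta(latest, sig):
    """Steps (4)-(5): emit ('copy', block index) for matched blocks of the
    outdated file, and ('data', raw bytes) for everything else."""
    ops, i, literal = [], 0, b""
    while i < len(latest):
        block = latest[i:i + BLOCK]
        weak = zlib.adler32(block)
        idx = sig.get(weak, {}).get(hashlib.md5(block).hexdigest())
        if idx is not None:
            if literal:
                ops.append(("data", literal))
                literal = b""
            ops.append(("copy", idx))
            i += BLOCK
        else:
            literal += latest[i:i + 1]
            i += 1
    if literal:
        ops.append(("data", literal))
    return ops

def patch(outdated, ops):
    """Slave-side reconstruction of the latest file from the old copy + delta."""
    out = b""
    for op, arg in ops:
        out += outdated[arg * BLOCK:(arg + 1) * BLOCK] if op == "copy" else arg
    return out
```

    The slave would compute `signature(β)` and send it; the master would answer with `delta(α, signature)`; the slave then runs `patch(β, delta)` to obtain α without α ever being transmitted in full.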

    The name rsync itself is that of an application already installed in most Linux distributions. Its manual page [9] describes it as a fast, extraordinarily versatile file copying tool that can replace conventional copying because it sends not the whole file but only the differences from the existing file. This paper, though, will use rdiff, an application that generates the difference between two binary files based on the rsync algorithm. It is basically an rsync implementation but gives more control than the existing rsync application; rdiff is part of the librsync package [10]. Another application that will be used is rdiffdir, since the course's backup file is an archive. Rdiffdir is the directory synchronization version of rdiff and is included in the duplicity package [11].
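    For reference, rdiff exposes the algorithm through three subcommands, signature, delta, and patch, as documented by librsync. The sketch below only builds the command lines: the "backup.mbz" and "backup.mbz.sig" names follow the experiment section, while the delta and output file names are illustrative since the paper leaves them unnamed, and actually running the commands requires the rdiff binary to be installed.

```python
import subprocess

def signature_cmd(basis, sig_file):
    # slave side: summarize the outdated archive into a small signature file
    return ["rdiff", "signature", basis, sig_file]

def delta_cmd(sig_file, latest, delta_file):
    # master side: diff the latest archive against the received signature
    return ["rdiff", "delta", sig_file, latest, delta_file]

def patch_cmd(basis, delta_file, result):
    # slave side: rebuild the latest archive from the old copy plus the delta
    return ["rdiff", "patch", basis, delta_file, result]

def run(cmd):
    # requires librsync's rdiff binary on the PATH
    subprocess.run(cmd, check=True)
```

    On the slave one would call `run(signature_cmd("backup.mbz", "backup.mbz.sig"))`, on the master `run(delta_cmd("backup.mbz.sig", "backup.mbz", "backup.mbz.delta"))`, and back on the slave `run(patch_cmd("backup.mbz", "backup.mbz.delta", "backup.new.mbz"))`. Rdiffdir follows the same signature/delta/patch pattern, applied recursively to a directory.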


    Backup and Restore Feature
    Figure 2. Screenshot of Course Backup Option

    Moodle has a course backup and restore feature that can back up a course into the .mbz format. Users with privileges are given almost full control over what to back up from the course: from whether to include users, anonymized users, or no users at all, to backing up the full content or only certain parts of it. This is shown in the menu screenshot in Figure 2, and in Figure 6, which is also our course design, showing the capability of choosing certain sections to back up. The restore feature provides the same menu. According to Moodle's documentation [12] it is also possible to alter the backup file for advanced use.

    Synchronization Method
    Figure 3. Proposed Synchronization Model

    As stated in the previous section, the experiments use rdiff rather than rsync directly because sharing backup courses over an rsync daemon or SSH is still uncommon, while sharing over the hypertext transfer protocol (HTTP) is very common. The slave side generates a signature file of its course backup archive and sends it to the master. The master side uses the received signature file and its own course backup archive to compute the delta file, which can also be described as a patch file for the slave side's course backup archive. The master side returns the delta file to the slave side, and the slave side uses it to generate the latest version of the course backup archive. The overall process is illustrated in Figure 3.
    Figure 4. Contents of Moodle's Course Backup Archive

    Two kinds of synchronization are demonstrated. One directly synchronizes the backup archive using rdiff, and the other synchronizes each file inside the backup archive recursively using rdiffdir. Unlike the first, which is purely binary file synchronization between the master's and slave's course backup archives, the second is closer to course synchronization. The contents of the course backup archive can be seen in Figure 4. The "activity" folder contains forums, lessons, quizzes, and the like; the "course" folder contains mostly the course's settings; the "files" folder contains materials uploaded for the course; and the "section" folder defines each section of the course. Rdiffdir recursively performs rdiff on those files. The result of rdiffdir is shown in Figure 5, where the difference for each file resides in the "diffs" folder, files newly added on the master side in the "snapshots" folder, and instructions to delete files removed on the master side in the "deleted" folder.
    Figure 5. Contents of Delta Archive Produced by Rdiffdir


    The experiment uses the main author's own course, developed in Moodle version 3.0, as material; it has three large sections (topics), as seen in Figure 6. We also made the course available at [13]; log in with username "teacher" and password "teacher". The experiment has seven scenarios, where scenario 1 is without synchronization and the others are with synchronization, as follows: (1) retrieving the whole course backup file (conventional sharing); (2) large content addition on the master side (slave side has only 1 section); (3) medium content addition on the master side (slave side has 2 sections); (4) small content addition on the master side (adding a URL module); (5) small change on the master side (modifying text in one of the course outline modules); (6) section order change on the master side (section 2 shifts to section 1, section 3 shifts to section 2, and section 1 shifts to section 3); (7) no change on the master side. Moreover, the scenarios are conducted in 3 situations: (a) between a local machine and a virtual machine, (b) over a local area network (LAN), and (c) over a public network at [14]. The local machine acts as the slave side while the other acts as the master side. Very simple PHP scripts were written to perform the synchronization illustrated in Figure 3. The total sent and received traffic is then measured using the packet capture tool Wireshark and discussed in the next section.
    Figure 6. Course's Design


    The first subsection, Demonstration, shows that the developed application utilizes the output of Moodle's course backup and restore feature. Unlike the previous applications in [5] and [6], it is not responsible for exporting and importing courses but relies on Moodle's internal feature. This makes this paper's synchronization application compatible with existing and upcoming versions of Moodle. The second subsection, Measurement Results, shows that the application functions as a synchronizer like the previous applications in [5] and [6] by showing the network efficiencies during transmission.


    We made the PHP scripts available at [15]. The first draft gives users on both the master and slave a feature to dump their own course backup archive in .mbz format; what information exists in the backup archive depends on which options are used in Moodle's backup and restore feature. We utilize a common PHP file upload script that can be found in many tutorials on the web, except that for this experiment the file is automatically renamed to "backup.mbz". The demonstration shown in this section is for scenario 2. Figure 7 is the console for both the master and slave LMSs to initially dump their course backups. As seen on the slave side, the outdated "backup.mbz" file has a size of around 16 MB, as it contains only the first section of the course in Figure 6 (a).
    Figure 7. Template of File Synchronization Console

    The next step is clicking the update button. The update button triggers instructions to generate a "backup.mbz.sig" signature file from the "backup.mbz" archive using the rdiff command, then send "backup.mbz.sig" to the master LMS URL stated in the script, written using PHP's curl functions. The script that accepts the file on the master LMS (the same common PHP upload script) runs an extra instruction to generate a delta (patch) file, with "backup.mbz.sig" and the master side's "backup.mbz" as inputs. The next step is to send the generated delta file to the slave LMS; for that we invoke a script on the slave LMS, also written with PHP's curl functions, to download it. That script also contains instructions to back up the previous "backup.mbz" as "backup.mbz.backup" and to apply patching with the rdiff command, updating "backup.mbz" with the delta file as input. Finally, Figure 8 shows the updated "backup.mbz" with a new file size of 30 MB, which includes all the contents seen in Figure 6. It also shows that "backup.mbz.sig" has a size of around 16 kB and the delta file a size of around 23 MB. The overall process is then repeated for each scenario.
    Figure 8. File Synchronization Console After Update Process

    The second draft is similar to the first except that it implements rdiffdir; it produces a signature file of around 1.5 MB and a delta file of around 16 MB for scenario one. During the synchronization process the "backup.mbz" archives on both the master and slave sides are extracted into a folder named "backup". Starting on the slave side, rdiffdir recursively generates signatures for each file in "backup" and stores them as an archive "backup.sig". The "backup.sig" is then sent to the master side and used as a reference to recursively produce deltas for each file in the master side's "backup" folder, storing the deltas in a delta archive. That archive is then sent to the slave side and used to patch the "backup" folder, which is finally recompressed into the archive "backup.mbz".

    Measurement Results

    The experiment was conducted by sending the signature file, which contributes the outgoing network traffic, and retrieving the delta file, which contributes the incoming network traffic.

    The first experiment synchronizes the course backup archive directly with rdiff (Figure 9), and the second experiment synchronizes each file contained within the course backup archive with rdiffdir (Figure 10). The signature file produced was roughly 200 kB and the delta file around 20 MB. The first scenario (without synchronization) downloaded the whole course backup file, which had a size of around 30 MB, while the other scenarios (with synchronization) downloaded only the difference generated by rdiff. The overall results show that the proposed method is more efficient than the conventional way (scenario 1): the slave side consumes around 30 MB of total traffic without synchronization and around 20 MB with synchronization, an efficiency of about 10 MB of network capacity in terms of bandwidth. For scenarios 2 and 3 the outdated courses differ considerably from the latest course, and the results show the method is very beneficial in this case. For scenarios 4, 5, and 6 the outdated courses have very few differences from the latest course, yet the results still show around 20 MB of network consumption, which is very high for this case. This is due to synchronizing while both archives are still compressed.
    Figure 9. Sent and received network traffic of direct backup archive synchronization

    The second experiment, on the other hand, decompresses both archives and synchronizes each file within, which is more accurate for course synchronization. Scenarios 4, 5, and 6 make only small changes to the course's contents, which makes the incoming network consumption also small, around 1.5 MB. This is a very large efficiency gain compared to the first synchronization experiment, although the outgoing traffic increases due to the high number of signature files. Either way, both experiments' results are better than no synchronization at all. The last scenario shows very low traffic because the course backup file on the slave side is up to date with the master side, so no update is required. Since the measurement is based on the outgoing and incoming traffic, it is logical that the public network shows slightly higher traffic than between virtual machines or on the local area network.
    Figure 10. Sent and received network traffic of recursive file synchronization

    Conclusion and Future Work

    Like the previous course synchronization method, the proposed use of rdiff and rsync on the backup archives of both the master and slave sides reduced the network consumption of course sharing in Moodle, with two additional merits over the previous method. The first is the flexibility to configure which of the course's contents are synchronized, and the second is time efficiency, since no adaptation of the application is needed when the Moodle version changes; however, neither was fully demonstrated in this paper. Therefore, in the future we will further develop its compatibility and demonstrate it on all versions of Moodle and on other LMSs. The method also opens the possibility of developing partial course synchronization.


    Part of this work was supported by JSPS KAKENHI Grant-in-Aid for Scientific Research 25280124 and 15H02795.


    1. N. S. A. M. Kusumo, F. B. Kurniawan, and N. I. Putri, “Learning obstacle faced by indonesian students,” in The Eighth International Conference on eLearning for Knowledge-Based Society, Thailand, Feb. 2012.
    2. S. Paturusi, Y. Chisaki, and T. Usagawa, “Assessing lecturers and student’s readiness for e-learning: A preliminary study at national university in north sulawesi indonesia,” GSTF Journal on Education (JEd), vol. 2, no. 2, pp. 1–8, 2015.
    3. Linawati, “Performance of mobile learning on gprs network,” Teknologi Elektro Journal, vol. 11, no. 1, pp. 1–6, 2012.
    4. M. Cooch, H. Foster, and E. Costello, “Our mooc with moodle,” Position papers for European cooperation on MOOCs, EADTU, 2015.
    5. R. M. Ijtihadie, B. C. Hidayanto, A. Affandi, Y. Chisaki, and T. Usagawa, “Dynamic content synchronization between learning management systems over limited bandwidth network,” Human-centric Computing and Information Sciences, vol. 2, no. 1, pp. 1–17, 2012.
    6. T. Usagawa, M. Yamaguchi, Y. Chisaki, R. M. Ijtihadie, and A. Affandi, “Dynamic synchronization of learning contents of distributed learning management systems over band limited network — contents sharing between distributed moodle 2.0 series,” in International Conference on Information Technology Based Higher Education and Training (ITHET), Antalya, Oct. 2013.
    7. (2016) Courses and content. [Online]. Available:
    8. A. Tridgell and P. Mackerras, “The rsync algorithm,” The Australian National University, Canberra ACT 0200, Australia, Tech. Rep. TR-CS-96-05, Jun. 1996.
    9. (2016) rsync(1) – Linux man page. [Online]. Available:
    10. (2016) librsync: rdiff. [Online]. Available:
    11. (2016). [Online]. Available:
    12. (2015) Course backup. [Online]. Available:
    13. (2016). [Online]. Available:
    14. (2016). [Online]. Available:
    15. (2016) 0fajarpurnama0/file-synchronizer-online-course-sharing. [Online]. Available:



    May 28, 2020

    Can I share my work after copyright transfer?

    Default Copyright

    A copyright is the right to copy an intellectual property. By default, the copyright belongs to the creator, with the requirement that the creator's name is attached to the intellectual property. Anyone else who wants to use or copy the work must have permission from the copyright holder. The copyright holder can also open the work by changing the license to a Creative Commons one, or give up the right entirely by labelling the work as public domain.

    Copyright Transfer

    A copyright transfer means transferring the copyright to another party. The main author loses the authority, so why would anyone want to do this? Generally, for marketing: the author may not have the capability to sell their work. Therefore, they rely on publishers and, depending on the contract, the author and publisher split the profit.

    On the academic side, authors need reputation, so they try to have their work published in top journals, proceedings, or reports. Why not do it themselves? Well, it is a big extra effort to build a work's reputation, and generally researchers want to focus only on creating and writing and do not want to be burdened with anything else. Top journals and proceedings provide peer review that controls quality and polishes the work. They have great marketing, a large audience, high quality, reputation and trust, a wide network, many professionals, etc. If you decide to publish yourself, you need to build everything from scratch.
    For example, when you publish your work with IEEE, you will be asked to transfer your copyright.

    After Copyright Transfer

    After a copyright transfer, you lose the rights to the work. The copyright is now with the other party, and they decide the permissions regarding your work. They can give you full permission, but usually they give you only partial permission. You are still the author of the work even though you no longer hold the copyright.

    Can you share your work? It depends on the party you gave the copyright to. If you do not know, you had better ask them. If they publicly state that they do not allow sharing, you need to ask them for permission and negotiate.
    For example, IEEE may give you permission to share the accepted version of your work, with the requirement that you state that it is copyrighted by IEEE and give the full location information of the published version. IEEE does not allow sharing the published version.
    On the other hand, IEICE does not allow you to share the accepted version but allows you to share the published version only. For both IEEE and IEICE, you are allowed to share on your personal websites or blogs.
    In the best case, publishers may give you permission, or may agree to open the work after some time. In the worst case, you are not allowed to do anything at all and have to ask for permission and negotiate every time you want to do something. Therefore, check carefully before transferring copyright.

    Need Advice

    I will share my works on personal websites and blogs, since the publishers allow me to, but their definition of those is vague. If it is a server that I have physical access to, then the case is strictly clear. However, what about a server where the author is allowed to upload and delete files without the consent of others (e.g., a blog, the server of a university department, or a preprint server)? On Blogger, GitHub, and Publish0x I can post and delete as I want, but I do not own the platform and they may revoke my rights, for example by banning me; or maybe my understanding is wrong, and whatever I post and delete is actually done with those platforms' consent. Please leave a comment if you understand.

    In my opinion, the message is that if the copyright holder requests that I delete my post, I can delete it immediately, and that is what matters. So, what happens if I am banned on those platforms? Will my posts be deleted or will they remain? If they would be deleted, then I am confident in posting. Please leave a comment if you know the answer.


    May 26, 2020

    Is Zero Electricity Cost Cryptocurrency Mining Possible? Solar Power Bank on Single Board Computers


    Fajar Purnama, Irwansyah, Muhammad Bagus Andra, and Tsuyoshi Usagawa


    • This paper was presented in The 14th International Student Conference on Advanced Science and Technology (ICAST) at Kumamoto University, Japan, on 29th November 2019 but was not published thus the copyright remained with me "Fajar Purnama" the main author where I have the authority to repost anywhere and I hereby declare to license it as customized CC-BY-SA where you are also allowed to sell my contents but with a condition that you must mention that the free and open version is available here. In summary, the mention must contain the keyword "free" and "open" and the location such as the link to this content.
    • This is the original version of the paper. Due to my laboratory's demand to emphasize education and a limitation of 2 A4 pages, some aspects related to the title were replaced with educational topics. If you want to see the modified one, it is available at Research Gate.
    • The presentation is available at Slide Share.


    Bitcoin has reached $10000 per coin again and other cryptocurrency coins' values have also drastically increased, but that does not mean that mining has become profitable at the personal level. The cost of electricity and Internet remains a liability in households, but what if there were a method to bring that electricity running cost to zero? The authors came up with the innovation of using solar panels to generate the electricity, and even more so, a practical method that can easily be followed by average people. That method is the combination of a solar panel, a USB power bank, and USB powered computer devices, which are usually smartphones and single board computers. The solar panel converts sunlight into electricity and the power bank serves as the battery to store it; today's available power banks are able to power USB powered computer devices. This article contains a mixed short discussion of economics, the environment, and innovative technologies.


    It has been 11 years since Satoshi Nakamoto published the Bitcoin whitepaper [1]. Bitcoin made it into the spotlight at the end of 2017, when the price peaked at up to $20000 per coin. The bubble then burst and the price dropped down to $3000. At the time of writing, the price has soared once again to $10000. The rising price attracts many investors and the volatility attracts many traders. In other words, many people seek to own Bitcoin and other cryptocurrency coins for profit.

    Originally, these cryptocurrency coins were not meant as an investment instrument but as a novel method for electronic transactions. While a common electronic transaction needs a third party such as a bank or another financial institution to verify the transaction, cryptocurrency coins do not need a third party. However, this is a discussion for another time due to the limited space of this article.

    Straight to the point, this article discusses methods to make mining profitable. The technical details are too much to discuss in this article but, financially, mining is the process of obtaining cryptocurrency coins by donating computational power to the network. Electricity cost is the biggest problem; therefore, the majority of miners seek a renewable source of energy such as hydro, solar, and wind [3]. This article implements solar energy for electricity generation but, unlike others, this work is scaled to household size and primarily targets the general public. Since the targets are households, the objective of this work is to assemble a solar powered mining machine where the materials are easy to get and the methods are easy to follow. This article's innovation is a solar power bank powering USB computer devices; due to the limited space of this article, only a single board computer, the Asus Tinker Board (ATB), is demonstrated. The remaining discussion is about how profitable this innovation is.

    Materials and Method

    Table 1. Materials necessary to execute the concept of this work.

    Materials | Specification | Price
    Solar Panel | 20 W, 5 V, 10.56 cm² | ~$15
    Power Bank | 5 V, 1-2 A, 20 Ah | ~$20
    Two Type C USB Cables | | ~$5
    Internet Connection | ~1 MBps (3 GB quota) | ~$8.95
    USB Computer Device | Asus Tinker Board (ATB) | ~$50
    ASIC USB (additional) | Futurebit Moonlander 2 | ~$60

    The first step is to build the device. The necessary materials are listed in Table 1 and can be bought at an electronics shop or online. Once the materials are available, they should be assembled as shown in Figure 1. The solar panel is used to charge the power bank and should be exposed to sunlight. The power bank should be used to power the USB computer devices and, if necessary, the device that provides the Internet connection.

    The second step is to build the software. Although other computers and accessories are not necessary during mining, they are necessary while building the software. Generally, there are five steps in building the software: 1) installing the operating system, 2) installing the miner and its dependencies, 3) choosing a coin to mine, 4) joining a pool or setting up solo mining, and 5) creating a cryptocurrency wallet. The third and last step is the mining itself.


    Table 2. Asus tinker board average resource consumption.

    Device | CPU | RAM | Data Rate | Power
    ATB CPU | 100 % | 205 MB | 0.626 kBps | 3.55 W
    ATB GPU | 25 % | 800 MB | 0.53 kBps | 4.29 W
    ASIC USB | 2 % | 200 MB | 1.064 kBps | 8.21 W

    This discussion covers the limits of the solar panel, the overall resource usage of mining, and the financial report. From the power consumption in Table 2, the power bank in Table 1 can last from 12 to 33 hours, and on average the solar panel takes about 30 hours to fully charge the power bank. During mining, the power usage in Table 2 is larger than the power generated by the solar panel in Table 3, which makes charging on the fly less advisable.

    Table 3. Solar power generated daily.

    Average Input | Daily Sunlight | Electricity Daily
    3.825 watts (W) | 12 hours (h) | 45.9 watt hours (Wh)

    The financial report is the main interest for the public, where the main question is how profitable this method is; this is described in Table 4. The only asset is the computer device itself, which generates income, while the others are liabilities, i.e., the running costs, the most common being electricity and Internet. The variables that determine the mining income are described in Table 5, where all variables depend on the coin; in this case, Litecoin is used.

    Table 4. Profitability table.

    Variable                   Category    Value
    Mining income: ATB CPU     Asset       LTC 17 x 10^-12 / s
                   ATB GPU                 LTC 2 x 10^-11 / s
                   ASIC USB                LTC 12 x 10^-9 / s
    Electricity                Liability   $ 0.18 / kWh [5]
    Internet                   Liability   $ 8.95 / month (3.1 GB)

    Table 5. Variables that affect mining income.

    Variable                             Value
    Hash rate: Asus Tinker Board CPU     2.06 kH/s
               Asus Tinker Board GPU     24.2 kH/s
               Futurebit Moonlander 2    3.3 MH/s
    Block difficulty                     15,608,688
    Block reward                         LTC 25
    Coin price                           LTC 1 = $ 94.22
    Current profit                       $ 2.7809 / day per 1 GH/s

    The hash rate depends on the hardware and software, where a higher hash rate means a higher income. The block difficulty depends on the total number of miners, or more accurately, the total hash rate on the network. From a financial point of view, the block difficulty represents the competition: the higher the block difficulty, the lower the income. The block reward is the reward for solving blocks, where a higher reward means a higher income. The coin value or coin price is a highly debated topic to this day; discussing the correct value of coins is beyond the scope of this article. For now, this article quotes the coin price in United States dollars (USD). The formula to calculate the amount of Bitcoin obtained from mining is given in Formula 1. For other coins, the formula can differ slightly but follows a similar concept.

    Expected payout in BTC = (H x t x B) / (2^32 x D) [3] (1)

    where H = hash rate, t = time, B = block reward, D = block difficulty
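    Formula 1 can be evaluated directly with the Litecoin parameters from Table 5. The sketch below plugs in each device's hash rate for one day of mining; the resulting figures are illustrative (small discrepancies against the tables are expected, since the article's income figures come from its own measurements rather than this idealized formula).

```python
# Expected daily payout per device, using Formula 1 (Rosenfeld [3])
# with the Litecoin parameters from Table 5.
hashrates = {              # H, in hashes per second
    "ATB CPU": 2.06e3,     # 2.06 kH/s
    "ATB GPU": 24.2e3,     # 24.2 kH/s
    "ASIC USB": 3.3e6,     # 3.3 MH/s (Futurebit Moonlander 2)
}
B = 25                     # block reward in LTC (Table 5)
D = 15_608_688             # block difficulty (Table 5)
PRICE_USD = 94.22          # LTC 1 = $ 94.22 (Table 5)
t = 24 * 3600              # one day, in seconds

for dev, H in hashrates.items():
    ltc = H * t * B / (2**32 * D)      # expected payout in LTC per day
    print(f"{dev}: {ltc:.3e} LTC/day = $ {ltc * PRICE_USD:.2e}/day")
```

    The 2^32 factor converts the difficulty into the expected number of hashes per block, so doubling either the hash rate or the block reward doubles the payout, while doubling the difficulty halves it.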

    The main discussion of this article is Table 6. Table 6 shows how much money can be earned using this article's method; the Internet cost is omitted to limit complication, because in reality the Internet is used not only for mining but also for all other activities. Additionally, the profit of regular mining, paying for electricity, is compared to this article's method of generating one's own electricity with a solar panel and power bank. Regular mining yields not a profit but a loss. Mining with this article's method is profitable, but limited by the daily mining time, because the power generated in Table 3 is not enough to run the mining for the whole day. From the data in Table 2 and Table 3, the daily mining time in Table 6 can be calculated. The daily income is then the mining income from Table 4, converted to USD, multiplied by the daily mining time in Table 6. The overall financial result is that this article's method is able to reap a profit where mining is usually not profitable.

    Table 6. Income rate of mining with paying for electricity versus getting electricity from the solar panel.

    Device     Daily Income with Electricity   Daily Mining Time (Solar)   Daily Income with Solar Power Bank
    ATB CPU    $ -0.015322158                  12 h 56 m                   $ 0.000007456
    ATB GPU    $ -0.018370232                  10 h 42 m                   $ 0.000072479
    ASIC USB   $ -0.02596304                   5 h 35 m                    $ 0.002218685
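    The "Daily Mining Time" column of Table 6 follows directly from Tables 2 and 3: the 45.9 Wh of daily solar electricity divided by each device's power draw. A minimal sketch:

```python
# Reproduce the daily mining time in Table 6: the daily solar yield
# (Table 3) divided by each device's power draw (Table 2).
DAILY_SOLAR_WH = 45.9                 # Wh per day (Table 3)
DRAW_W = {"ATB CPU": 3.55, "ATB GPU": 4.29, "ASIC USB": 8.21}  # W (Table 2)

for dev, w in DRAW_W.items():
    hours = DAILY_SOLAR_WH / w
    h, m = int(hours), round((hours - int(hours)) * 60)
    print(f"{dev}: {h} h {m} m")
# Prints 12 h 56 m, 10 h 42 m, and 5 h 35 m, matching Table 6.
```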


    This article successfully implemented mining on a single board computer without paying electricity costs by harvesting solar energy. The method is well suited for households because the materials are affordable and easy to obtain, and the assembly process is not complicated. Although the financial report shows a profit, the profit is extremely small: it would take a year to earn a dollar. The problem is Litecoin. In this work, Litecoin was chosen because it is mineable on CPU, GPU, and ASIC alike. In reality, different hardware has different profitable coins to mine. For example, mining Magicoin on a CPU can profit $ 0.0026 a day, which is 349 times more profitable than Litecoin. Another factor is the speculative price of the coins; for example, nobody predicted that the price of Bitcoin could rise from $ 1 to $ 10,000 in ten years. This work is only an introduction, and many possibilities are not yet explored. Other than constantly searching for and switching to the right coin to mine, device expansion may increase income. Also, there are other types of renewable energy that are still not utilized. Aside from finances, this innovation is good for education, cryptocurrency contribution, and hobby.
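    The claimed 349x figure can be cross-checked against the numbers already in the text: the $ 0.0026/day Magicoin estimate against the ATB CPU daily solar income from Table 6.

```python
# Cross-check the "349 times more profitable" claim: Magicoin on CPU
# ($ 0.0026/day, as claimed) versus the ATB CPU solar income (Table 6).
magicoin_daily = 0.0026            # USD/day, Magicoin on CPU
litecoin_daily = 0.000007456       # USD/day, ATB CPU solar (Table 6)

ratio = magicoin_daily / litecoin_daily
print(f"Magicoin is about {ratio:.0f}x more profitable")  # about 349x
```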


    1. Nakamoto S. Bitcoin: A peer-to-peer electronic cash system. 2008.
    2. Bendiksen C. Surprise: Majority of BTC Energy Sourced from Hydro / Wind / Solar ♻. Medium, CoinShares, 2019 Jun 06.
    3. Rosenfeld M. Analysis of bitcoin pooled mining reward systems. arXiv preprint arXiv:1112.4980. 2011 Dec 21.