Angular 4: Generating Components and How to Use Them

To create a new component in Angular 4 automatically with the Angular CLI, type the following in the project’s root directory,

$ ng generate component my-component

Where ‘my-component’ is the name of the component. The keywords ‘generate’ and ‘component’ can be shortened to ‘g’ and ‘c’ respectively, so the following command does the same as the one above,

$ ng g c my-component

This will create a new folder in the app directory with the following files,
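With the default CLI settings the new src/app/my-component folder typically contains:

my-component.component.ts
my-component.component.html
my-component.component.css
my-component.component.spec.ts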

The command also makes changes to the file src/app/app.module.ts on lines 7 and 12,
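Roughly, the CLI adds an import for the new component and an entry in the declarations array; with a freshly generated project, app.module.ts ends up looking something like this (the added import lands on line 7 and the new declarations entry on line 12),

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';

import { AppComponent } from './app.component';
import { MyComponentComponent } from './my-component/my-component.component';

@NgModule({
  declarations: [
    AppComponent,
    MyComponentComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }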

Also the file src/app/my-component/my-component.component.ts is created as follows,
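With the default settings it typically looks like this,

import { Component, OnInit } from '@angular/core';

@Component({
  selector: 'app-my-component',
  templateUrl: './my-component.component.html',
  styleUrls: ['./my-component.component.css']
})
export class MyComponentComponent implements OnInit {

  constructor() { }

  ngOnInit() {
  }

}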

These changes could all be implemented manually instead of using the command.

To create a new component without the src/app/my-component/my-component.component.spec.ts file, which is for unit testing,

$ ng g c my-component --spec false

my-component.component.ts contains

templateUrl: './my-component.component.html',

To use the contents of src/app/my-component/my-component.component.html in the root app HTML file src/app/app.component.html, add a custom HTML tag with the component’s selector as follows,

<app-my-component></app-my-component>

Angular 4: Data Binding Two-Way-Binding

Initially the button is disabled until a username is input, after which time it is possible to reset the username by clicking the button.


In app.component.ts we simply declare two variables and set ‘username’ to an empty string. The function onUsernameReset() resets ‘username’ back to an empty string.

export class AppComponent {
  username: string = '';
  allowButton: boolean = false;

  onUsernameReset() {
    this.username = '';
  }
}

In app.component.html we use [(ngModel)] for two-way data binding. This uses both square and round brackets. The input is bound to ‘username’ and, because of the two-way binding, its value is available everywhere, e.g. for string interpolation on the next line using {{ username }} and for property binding with [disabled]="!username".

<div class="container">
  <div class="row">
    <div class="col-md-4">
      <label for="box">Username:</label>
      <input name="box" type="text" class="form-control" [(ngModel)]="username"/>
      <p> {{ username }} </p>
      <button class="btn btn-primary" [disabled]="!username" (click)="onUsernameReset()">Reset username</button>
    </div>
  </div>
</div>

The ‘click’ event is put in round brackets and calls the function onUsernameReset(), which can be found in app.component.ts.
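Note that [(ngModel)] only works if FormsModule (from @angular/forms) is listed in the imports array of app.module.ts. Projects generated by the CLI at this time typically include it already, but if not, add it roughly like this,

import { FormsModule } from '@angular/forms';

@NgModule({
  // ...
  imports: [
    BrowserModule,
    FormsModule
  ],
  // ...
})
export class AppModule { }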

Angular 4: Data Binding Event Binding

When an event occurs, e.g. clicking a button, we want a function onButton() to be called.

In app.component.ts we have the function onButton() which simply logs a few words to the console,

export class AppComponent {
  onButton() {
    console.log("You pressed it, well done!");
  }
}

In app.component.html the event, in this case ‘click’, is in round brackets,

<button class="btn btn-primary" (click)="onButton()">Button</button>

Angular 4: Basic Setup & Bootstrap

https://cli.angular.io/

Install the Angular CLI tool globally,

$ npm install -g @angular/cli

Create a new app or project,

$ ng new my-new-app

This creates a new directory, next move in to it,

$ cd my-new-app

Run the app in development,

$ ng serve

Install bootstrap,

$ npm install --save bootstrap

Then add Bootstrap to .angular-cli.json, which can be found in the root of the project. Find the ‘styles’ array and add the path for Bootstrap like this,

"styles": [
"../node_modules/bootstrap/dist/css/bootstrap.min.css",
"styles.css"
],

It may be necessary here to run,

$ npm install

from the project root directory, the directory containing the node_modules directory.

You can then use bootstrap in my-new-app/src/app/app.component.html

<div class="container">
  <div class="row">
    <div class="col-md-4">one</div>
    <div class="col-md-4">two</div>
    <div class="col-md-4">three</div>
  </div>
</div>

GIT Notes: Remote Repositories

GIT Succinctly – free eBook from Syncfusion

    Chapter 6 Remote Repositories

Manage connections to other repositories, and list them with,

$ git remote

To get more information about remote repos,

$ git remote -v

Create a new connection to a remote repository,

$ git remote add <user-name> <path-to-repo>

Now you can reach that repo with the short name <user-name> instead of typing the full path out.
Git accepts the protocols file://, ssh://, http://, and git://
For example,

$ git remote add <user-name> ssh://git@github.com/<user-name>/<repo-name>.git

The repo itself can be found at github.com/<user-name>/<repo-name>.git
Delete a remote connection,

$ git remote rm <remote-name>

Remote branches represent branches in someone else’s repository.
Fetching is the act of downloading branches from another repository.

$ git fetch <repo> <branch>

Omit branch if you want all branches in a repository.
View downloaded branches with,

$ git branch -r

Remote branches are prefixed with the name of the remote, e.g. ‘origin/’. You can check one out to look at its history with,

$ git checkout <remote-name>/<branch>

Remote branches behave like read-only branches until you integrate them into your local repository.
Display new updates that are in ‘origin/master’ but not in your local ‘master’, like this,

$ git log master..origin/master

You can checkout remote branches but this will put you in a detached HEAD state, and without a branch changes will be lost, unless you create a new local branch tip to reference them.

Incorporate changes from origin/master into a local branch,

$ git checkout <mybranch>
$ git fetch origin
$ git merge origin/master

This results in a “3-way” merge: origin/master is merged into your branch using their common ancestor as the third point. Done repeatedly, this produces many meaningless merge commits.
This can be overcome with rebasing,

$ git checkout <mybranch>
$ git fetch origin
$ git rebase origin/master

‘pull’ is the ‘fetch/merge’ sequence combined.
Fetch the origin’s master branch and merge it into the current branch.

$ git pull origin master

That will merge, but if you’d prefer to rebase use the ‘--rebase’ flag, which goes straight after ‘pull’,

$ git pull --rebase origin master

Send local branch to a remote repository,

$ git push <remote> <branch>

This creates the branch in the remote repository, which is how a local branch can appear in a remote repository when someone pushes it there.

Public repositories are bare repositories; they do not have a working directory.

Create a bare repository with,

$ git init --bare <path>.git

Bare repositories only function as storage facilities.

A push to origin/master may be rejected because your local branch is not in sync.
Synchronise with a central repository,

$ git fetch origin master
$ git rebase origin/master
$ git push origin master

GIT Notes: Branches

GIT Succinctly – free eBook from Syncfusion

    Chapter 5 Branches

A new branch is a new development environment with an isolated working directory, staging and project history.
‘git branch’ is used for listing, creating, and deleting branches.
To view existing branches,

$ git branch

The output will indicate the currently checked-out branch with an asterisk.
The master branch is git’s default branch.
Create a new branch,

$ git branch <branch-name>

This does not switch you to the new branch; it only creates a new pointer to the current HEAD, i.e. the commit you currently have checked out.
To switch branch,

$ git checkout <branch-name>

Initially both branches reference the same commit, but any new commits will be exclusive to the current branch, so each branch ends up with a tip of its own.
To delete a branch,

$ git branch -d <branch-name>

This will not delete branches with unmerged changes; to force the deletion,

$ git branch -D <branch-name>

After switching to a new branch, your working directory is updated to match the latest commit on that branch; HEAD now points at the tip of the checked-out branch.
Make sure your working directory is ‘clean’, meaning no uncommitted changes, before checking out another branch. Otherwise ‘git checkout’ could overwrite your modifications.
Using ‘git checkout’ with tags and commit IDs puts you in a detached HEAD state. This means you are not on a branch anymore; you are directly viewing a commit and will lose any work as soon as you switch to a real branch.
Creating a new branch in a detached HEAD state,

$ git checkout -b <new-branch-name>

This is a new branch reference to the formerly detached state.
In this case ‘-b’ is a shortcut for the two commands,

$ git branch <new-branch-name>
$ git checkout <new-branch-name>

(But how do you use ‘git checkout’ with tags and commit IDs to create a detached HEAD state in the first place?)
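The answer is simply to check out the tag or commit ID directly; for example (v1.0 here is just an illustrative tag name),

$ git checkout v1.0
$ git checkout <commit-id>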
Merging is the process of pulling commits from one branch into another.
Merge methodologies: “fast-forward” merge, or a “3-way” merge.
The branch you want to merge into must be checked out and the target branch will remain unchanged.
To merge a branch called ‘sidebranch’ into master,

$ git checkout master
$ git merge sidebranch

This leaves sidebranch unchanged.
“fast-forward” merge: I create a new branch off master and add two commits to it; this type of merge simply moves master forward to that latest branch commit. Master then contains all the desired history and the branch can be deleted.
This was a simple situation because there were no extra commits on master. If there are, a “3-way” merge is used.
“3-way” merge: I create a new branch off master and add two commits, during which time master gets a new commit of its own. To merge, git generates a new merge commit, a combined snapshot of both branches. This commit has two parent commits and a history from both branches.
After using,

$ git checkout master

both types of merge use the same command,

$ git merge sidebranch

When two branches make different changes to the same portion of code, the result is a merge conflict. This cannot occur in a “fast-forward” merge.
When a merge conflict occurs, see which files are affected with,

$ git status

If you get a merge conflict open the file and you’ll see something like,

<<<<<<< HEAD
This content is from the current branch.
=======
This is a conflicting change from another branch.
>>>>>>> some-feature

Delete the content you don’t want, including all of the <<<<<<< , ======= , >>>>>>> marker lines.
Then do,

$ git add <file>

and

$ git commit

Rebasing requires a branch to be checked out,

$ git checkout branch

Then to move the entire branch onto the tip of master,

$ git rebase master

The branch’s commits are reapplied on top of master’s tip and the branch pointer moves to the new commits; master itself is unchanged. This results in the same snapshot as if the branch had been merged with master.
“3-way” merge results in an extra merge commit. Rebasing has no extra commits and results in a cleaner linear history.
Change commits as you’re moving them to the new base, by specifying an interactive rebase,

$ git rebase -i master

This populates an editor with commits from the branch.
Specify an interactive rebase as follows,

pick 58dec2a First commit for new feature
squash 6ac8a9f Second commit for new feature

‘pick’ moves the first commit to the new base just as in ‘git rebase’. ‘squash’ combines the second commit with the previous one, so you end up with one commit containing all the changes.
Interactive rebasing lets you rewrite a branch’s history: you can add intermediate commits as you go, then come back and fix them up into a more meaningful progression afterwards.
Never rebase a branch that has been pushed to a public repository.

GIT Notes: Undoing Changes

GIT Succinctly – free eBook from Syncfusion

    Chapter 4 Undoing Changes

Undoing changes in git could mean,

  • Undo changes in working directory
  • Undo changes in staging area
  • Undo an entire commit

Changes can be made by deleting a commit or by using a new commit to undo changes introduced by the first commit.
The most recent commit is the ‘HEAD’.
To make the working directory and the stage match the files in the most recent commit (this will actually change files in your working directory!),

$ git reset --hard HEAD

(Plain ‘git reset’ only unstages changes and leaves the working directory alone; the --hard flag also resets the working directory.)

To get rid of (meaning to delete) untracked files,

$ git clean

‘-f’ option forces deletion of these files,

$ git clean -f

(The terms ‘option’ and ‘flag’ are used more or less interchangeably; short options are a single character prepended with a single hyphen, e.g. ‘-f’, while long options are whole words prepended with two hyphens, e.g. ‘--hard’.)
A ‘reset --hard’ makes all files in the working directory and staging area sync with the latest commit.
To make a single file in the working directory match the version in the most recent commit (this bypasses staging),

$ git checkout HEAD <file>

You can replace HEAD with a commit ID, branch or tag to make the file match the version in that commit.
NB: do not try this with ‘git reset’.
To unstage a file,

$ git reset HEAD <file>

This will not change the working directory; the file’s changes show up as unstaged modifications in ‘git status’.
To reset every file in the working directory and staging,

$ git reset --hard HEAD

This discards all staged and unstaged modifications (untracked files are left alone).
Undoing commits:
‘reset’ removes the commit from the project history.
‘revert’ generates a new commit that undoes the changes introduced by the original commit.
To move the HEAD reference back one commit,

$ git reset HEAD~1

This removes the most recent commit. You can go back two commits with ‘HEAD~2’, which removes the two most recent commits. It is not good to do this on public projects with other collaborators.
Reverting adds a new commit that undoes the problem commit

$ git revert <commit-id>

This takes the changes in the specified commit, figures out how to undo them, and creates a new commit with the resulting changeset. This is the way to undo a commit in a public repository.
Replace a previous commit instead of creating a new one,

$ git commit --amend

This rewrites history!

GIT Notes: Recording Changes

GIT Succinctly – free eBook from Syncfusion

    Chapter 3 Recording Changes

A ‘snapshot’ is a complete record of state of files, not of differences between other states.
‘staging’ allows you choose what changes go into the commit.
To stage files,

$ git add <file> (multiple files can be listed, separated by spaces)

or to stage all,

$ git add .

To stop tracking a file, in other words to delete it from the project but not the working directory,

$ git rm --cached <file> (multiple files can be listed here too)

To view status of working directory and staging area,

$ git status

To output status of every unstaged change in your working directory,

$ git diff

To output difference of all staged changes

$ git diff --cached

To display committed snapshots,

$ git log
(‘git status’, by contrast, shows the state of the working directory and staging area, not the commit history.)

We start with the ‘working directory’. This is ‘staged’ with ‘git add .’ and is now a ‘staged snapshot’, which can then be ‘committed’ to the ‘history’. It can then be ‘pushed’ to a remote repo.
A ‘commit’ is a saved version or ‘snapshot’ of the project, containing user info, date, commit message and SHA-1 checksum of entire contents.
A ‘commit’ is a step removed from working directory.
To commit the staged snapshot and add it to the history of the current branch,

$ git commit

You’ll be asked for a commit message.
Alternatively, if the message is short you can use,
$ git commit -m “commit message goes here”
Display current branch’s commits,

$ git log (already mentioned above)

For working directory and stage we use: git add, git rm, and git status
For commit history: git commit, and git log
To display each commit on a single line,

$ git log --oneline

To display the history of an individual file,

$ git log <file>

Filter commits: display commits contained in <until> but not in <since>. Both arguments can be a commit ID, branch name or tag,

$ git log <since>..<until>

To see what files were affected by a particular commit, display a diffstat of the changes in each commit,

$ git log --stat

Tags are simple pointers to commits. Create a new tag,

$ git tag -a v1.0 -m "Stable release"

-a creates an annotated tag and -m lets you record a message.
List your existing tags,

$ git tag

GIT Notes: Overview & Configuration

GIT Succinctly – free eBook from Syncfusion

    Chapter 1 Overview
  • working directory
  • staging
  • committed history
  • development branch
    Chapter 2 Configuration

Configuration file in home directory,

$ cat ~/.gitconfig

To git init a directory,

$ git init <path-to-directory>

Omit the path to git init the current directory.

This creates a ‘.git’ directory inside it.
‘clone’ downloads a complete copy of a repo. Clone can be used instead of (or as well as) ‘git init’.

$ git clone ssh://<user>@<host>/path/to/repo.git (never used ssh here before)

Could also,

$ git clone <path-to-repo>

Python: append a variable to every occurrence of a substring

Using the function below from Stack Overflow,

‘s’ is the original string, ‘sub’ the substring to be replaced, ‘repl’ the string to replace it with and ‘nth’ the nth occurrence of the substring in the string ‘s’.

def nth_repl(s, sub, repl, nth):
    find = s.find(sub)
    # i is 1 (True) if at least one match was found
    i = find != -1
    # keep searching until the nth occurrence, or until there are no more matches
    while find != -1 and i != nth:
        find = s.find(sub, find + 1)
        i += 1
    # if the nth occurrence was found, replace it
    if i == nth:
        return s[:find] + repl + s[find + len(sub):]
    return s

Count the occurrences of the substring,

num = str.count("text")

Place this function in a loop that iterates through every occurrence of the substring in the string, appending that occurrence’s index to it.

for i in range(0, num + 1):
    newstr = nth_repl(str, "text", "text%d" % (i - 1), i)
    str = newstr

And then,

print str

Hardening SSH

SSH created key-pair

More information on this topic at securing-your-server.
On your local machine, create a 4096-bit RSA key-pair,

$ ssh-keygen -b 4096

If you get a message saying that a key already exists and asking whether to overwrite the file, do not proceed unless you really want to create a new key-pair: an existing key is probably already being used to SSH into somewhere else, and you don’t want to mess that up.
To check if it already exists,

$ cat ~/.ssh/id_rsa

Copy key to remote server

Next copy this key to the remote server,

$ ssh-copy-id <username>@<remote-server>

Now login to the remote server. You should not be asked for the password, since the key-pair now exists.

We do not want root logins. Open /etc/ssh/sshd_config, navigate to the ‘# Authentication:’ section and set,

PermitRootLogin no

and also change to this, further down,

# Change to no to disable tunnelled clear text passwords
PasswordAuthentication no
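For these changes to take effect the SSH daemon needs to be restarted; on Raspbian/Debian that is typically,

$ sudo systemctl restart ssh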

Raspberry Pi Linux – Managing User Accounts

This information is available on the Raspberry Pi site here. I am just recapping for myself.

Change a user’s password

$ sudo passwd <username>

Remove a user’s password

$ sudo passwd -d <username>

Create a new user

$ sudo adduser <username>

Remove a user,

$ sudo userdel -r <username>

‘pi’ is a ‘sudoer’, which means this user can run as root when a command is preceded by ‘sudo’.
To switch to root,

$ sudo su

Add a user to ‘sudoers’,

$ sudo visudo

And add a line under ‘root ALL=(ALL:ALL) ALL’, replacing ‘root’ with the username, e.g.,

<username> ALL=(ALL:ALL) ALL

Raspberry Pi 3 Running noip2 systemd (part 9)

I find whenever the Pi has to reboot that I have to start noip2 each time with,

$ sudo noip2

This is exactly where systemd is very useful: it can automatically start noip2 on boot and restart it any time it stops.

First we need to find where the noip2 binary is,

$ which noip2
/usr/local/bin/noip2

This is useful for finding the binary that runs any command, e.g.,

$ which ls
/bin/ls

Anyway, now we know that noip2 runs at /usr/local/bin/noip2. I would like to create a systemd service to run that binary file.

Create the file noip2.service at

/etc/systemd/system/noip2.service

[Unit]
Description=No-ip.com dynamic IP address updater
After=network.target
After=syslog.target

[Install]
WantedBy=multi-user.target
Alias=noip.service

[Service]
# Start main service
ExecStart=/usr/local/bin/noip2
Restart=always
Type=forking

I obtained this file from Nathan Giesbrecht’s GitHub page.

Enable the service, so that it will start on boot.

$ sudo systemctl enable noip2.service

Then start it,

$ sudo systemctl start noip2.service

Sometimes it is necessary to also,

$ sudo systemctl daemon-reload

Check the status of this new service with,

$ systemctl status noip2.service

To test it, reboot your Pi.

$ sudo reboot

Check that the noip2 is running automatically with,

$ ps -aux|grep noip2

The second line of output is the grep process itself, so ignore it. The first line shows that the noip2 process is running.

Raspberry Pi 3 Express Application (part 8)

Check Node and npm are installed,

$ node --version
v7.4.0
$ npm --version
4.0.5

Install Express generator globally,

$ npm install express-generator -g

Check it,

$ express --version
4.14.0

Create a new directory inside /opt/nodeserver

$ mkdir tempGraph
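The scaffolding comes from running the Express generator inside that directory, presumably something like,

$ cd tempGraph
$ express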

This will create several folders as well as the package.json file with all dependencies. Next install these dependencies,

$ sudo npm install

npm automatically locates the package.json in the directory the command is run in and installs the dependencies listed there.

Start express with,

$ npm start

Open another terminal on the same system and type,

$ curl localhost:3000

You should see some HTML.
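With the default generator template the response is roughly,

<!DOCTYPE html>
<html>
  <head>
    <title>Express</title>
    <link rel="stylesheet" href="/stylesheets/style.css">
  </head>
  <body>
    <h1>Express</h1>
    <p>Welcome to Express</p>
  </body>
</html>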

Install nodemon to automatically restart application should it be updated, which will happen frequently during development.

$ npm install -g nodemon

To start nodemon, first stop the application and make sure you are in the application’s directory, then,

$ nodemon

Nodemon is for development not production.

Raspberry Pi 3 Crontab (part 7)

To run a file every 5 minutes we will use crontab,

$ crontab -e

And add the following to the bottom of that file, which opens with the ‘nano’ editor.

# m h dom mon dow command
*/5 * * * * python /opt/nodeserver/temperature.py # At every 5th minute
1 */1 * * * python /opt/nodeserver/temperatureHourlyAverages.py # At minute 1 past every hour
2 0 * * * python /opt/nodeserver/temperatureDailyAverages.py # At 00:02 every day

This data can be graphed using d3.js. The file and graph are available at shanespi.no-ip.biz, if the server is running at the time, that is. Next I want to serve this with the Express backend framework as a systemd service.

Systemd could be used to run the python files instead of the crontab.

Raspberry Pi 3 DS18B20 Temperature Sensor (part 6)

Open /boot/config.txt

$ sudo nano /boot/config.txt

and add

dtoverlay=w1-gpio

to the end of the file.

Reboot the system,

$ sudo reboot

Then run,

$ sudo modprobe w1-gpio
$ sudo modprobe w1-therm

Then change directory to /sys/bus/w1/devices/ and list its contents,
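For example (the 28-xxxxxxxxxxxx entry is the serial number of the particular sensor, so yours will differ),

$ cd /sys/bus/w1/devices/
$ ls
28-000006c87ee2  w1_bus_master1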

Change directory into 28-000006c87ee2 and then ‘cat w1_slave’.

The ‘t=16125’ is a temperature reading of 16.125 degrees Celsius from the temperature sensor.

Check out Adafruit’s article on reading the DS18B20.

Using Python to read the temperature sensor and output the value to a file data.json

# /opt/nodeserver/temperature.py
# crontab -e runs this file every 5 minutes
# Reads the temperature from the sensor
# Reads the /opt/nodeserver/data/data.json file
# Appends the reading and removes the earliest reading, only ever 13 readings allowed
# Dumps the data back into /opt/nodeserver/data/data.json

import os
from decimal import Decimal
import json
import datetime
import time

now = datetime.datetime.now()

# Open the file that we viewed earlier so that python can see what is in it. Replace the serial number as before.
tfile = open("/sys/bus/w1/devices/28-000006c87ee2/w1_slave")
# Read all of the text in the file.
text = tfile.read()
# Close the file now that the text has been read.
tfile.close()
# Split the text with new lines (\n) and select the second line.
secondline = text.split("\n")[1]
# Split the line into words, referring to the spaces, and select the 10th word (counting from 0).
temperaturedata = secondline.split(" ")[9]
# The first two characters are "t=", so get rid of those and convert the temperature from a string to a number.
temperature = Decimal(temperaturedata[2:])
# Put the decimal point in the right place and display it.
temp = temperature/1000

with open('/opt/nodeserver/data/data.json', 'r') as f:
    data = json.load(f)

data[0]["fiveMinReadings"].append({"temp": str(temp), "date": str(now)})

while len(data[0]["fiveMinReadings"]) > 13:
    del data[0]["fiveMinReadings"][0]

with open('/opt/nodeserver/data/data.json', 'w') as f:
    json.dump(data, f)

temperatureHourlyAverages.py calculates an average of the last 13 values written by temperature.py, effectively finding an average for each hour. The result is appended to data.json.

# /opt/nodeserver/temperatureHourlyAverages.py
# crontab -e runs this file every hour
# Reads the /opt/nodeserver/data/data.json file
# Calculates the average reading over the last hour
# Appends the average and removes the earliest one, only ever 25 readings allowed
# Dumps the data back into /opt/nodeserver/data/data.json

import os
from decimal import Decimal
import json
import datetime
import time

now = datetime.datetime.now()
with open('/opt/nodeserver/data/data.json', 'r') as f:
    data = json.load(f)

sum = 0
for i in range(0, len(data[0]["fiveMinReadings"])):
    sum = sum + float(data[0]["fiveMinReadings"][i]["temp"])

average = sum / len(data[0]["fiveMinReadings"])  # the average of the previous 13 readings taken every 5 minutes
temp = round(average, 2)

data[1]["hourlyAverages"].append({"date": str(now), "temp": str(temp)})

while len(data[1]["hourlyAverages"]) > 25:
    del data[1]["hourlyAverages"][0]

with open('/opt/nodeserver/data/data.json', 'w') as f:
    json.dump(data, f)

Similarly, temperatureDailyAverages.py,

# /opt/nodeserver/temperatureDailyAverages.py
# crontab -e runs this file once a day, just after midnight
# Reads the /opt/nodeserver/data/data.json file
# Calculates the average reading over the last 24 hours
# Appends the average and removes the earliest one, only ever 30 readings allowed
# Dumps the data back into /opt/nodeserver/data/data.json

import os
from decimal import Decimal
import json
import datetime
import time

now = datetime.datetime.now()
with open('/opt/nodeserver/data/data.json', 'r') as f:
    data = json.load(f)

sum = 0
for i in range(0, len(data[1]["hourlyAverages"])):
    sum = sum + float(data[1]["hourlyAverages"][i]["temp"])

average = sum / len(data[1]["hourlyAverages"])  # the average of the hourly readings from the previous 24 hours
temp = round(average, 2)

data[2]["dailyAverages"].append({"date": str(now), "temp": str(temp)})

while len(data[2]["dailyAverages"]) > 30:
    del data[2]["dailyAverages"][0]

with open('/opt/nodeserver/data/data.json', 'w') as f:
    json.dump(data, f)

Raspberry Pi 3 Node Server & systemd (part 5)

Run a simple Node server. Create the file and path,

/opt/nodeserver/server.js

const http = require('http');
const hostname = '0.0.0.0'; // listen on all network interfaces
const port = 8080;

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello people, I am a server \n' + new Date().toISOString());
}).listen(port, hostname, () => {
  console.log('Server running at http://' + hostname + ':' + port);
});

Run this,

$ node server.js

But this will stop running when you press Ctrl-C. Therefore we will use systemd to create a service which will run this server continuously and also start it again in the case of any interruption.

Change directory to

$ cd /etc/systemd/system

Create the file nodeserver.service

$ sudo nano nodeserver.service

and add the following [Unit], [Service] and [Install] as follows,

[Unit]
Description=Node.js Example Server
# Uncomment if the mysql service must run first
#Requires=After=mysql.service

[Service]
ExecStart=/usr/bin/node /opt/nodeserver/server.js
Restart=always
# Restart the service after 10 seconds if the node service crashes
RestartSec=10
# Output to syslog
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=nodejs-example
User=pi
Group=root
Environment=NODE_ENV=production PORT=8080

[Install]
WantedBy=multi-user.target

You may need to change the ‘User’ and ‘Group’. Make sure the PORT=8080 is the same port as in server.js.

Next we need to ‘enable’ and ‘start’ the service

$ sudo systemctl enable nodeserver.service
$ sudo systemctl start nodeserver.service

You can also use ‘restart’, ‘stop’ and ‘status’ to check that the service is running.

$ sudo systemctl restart nodeserver.service
$ sudo systemctl stop nodeserver.service
$ sudo systemctl status nodeserver.service

If you change the service file you will need to reload the daemon with,

$ sudo systemctl daemon-reload

After a ‘reload’ you will need to ‘restart’ the service.

$ sudo systemctl restart nodeserver.service

Most of the information here is from the excellent tutorial at nodejs service with systemd
Another good source,
Deploying nodejs applications with systemd

You can check the server from the command line with,

$ curl http://0.0.0.0:8080

If you’ve set up port-forwarding and noip.com you’ll be able to go to any browser on the internet and see the output.

Raspberry Pi 3 Static IP Address & No-ip setup (part 3)

To set up a static IP address on the Pi 3, which will remain constant inside your LAN, open /etc/dhcpcd.conf,

$ sudo nano /etc/dhcpcd.conf

Add to the bottom of this file,

interface eth0

static ip_address=192.168.1.30
static routers=192.168.1.1
static domain_name_servers=192.168.1.1

interface wlan0

static ip_address=192.168.1.30
static routers=192.168.1.1
static domain_name_servers=192.168.1.1

Here 192.168.1.30 is the static IP address I am creating.
192.168.1.1 is the gateway address, the address of my router from inside the network.

You will need to access your router’s settings and reserve the IP address 192.168.1.30, or whatever you chose, so that no other device on your network will be allocated this address.

Your router’s external address, that is the IP with which it is identified on the internet, is usually dynamic. You can work around this using ‘no-ip’, which allows you to use a name instead of having to find the ever-changing IP address.

Sign up for a free account at no-ip.com. The free account needs to be renewed every 30 days, but this is an excellent and very useful service.

The instructions for setting up noip2 on a Raspberry Pi 3 can be found on the no-ip website.

Raspberry Pi 3 Wifi setup (part 2)

First slot in the SD card with the ethernet cable plugged in, a USB mouse and keyboard, and also a monitor, usually connected by an HDMI connection.

Ethernet should be working. Check by pinging a site in the terminal,

$ ping duckduckgo.com

If this doesn’t work you won’t be able to upgrade the system.

Update and upgrade with the Debian package manager, ‘apt-get’

$ sudo apt-get update
$ sudo apt-get upgrade

Then update the Linux distribution,

$ sudo apt-get dist-upgrade

Change directory to the location of the wpa_supplicant.conf file to set the wifi network and password.

$ cd /etc/wpa_supplicant

Then open the file wpa_supplicant.conf

$ sudo nano wpa_supplicant.conf

At the end of the file add,

network={
ssid="your-network-name"
psk="your-wifi-password"
}

Ctrl-X then Y will save this file and exit. Ctrl-O will save without closing.

At this stage I went to the Pi icon in the top left of the screen and ‘Preferences’ > ‘Raspberry Pi Configuration’

Go to ‘Change Password’. The current password is ‘raspberry’ and choose a new one.
Then under the interfaces tab, it is a good idea to at least enable SSH. Then under the ‘Localisation’ tab set the ‘Set Locale’, ‘Set Timezone’ and ‘Set WiFi Country’.

Next reboot,

$ sudo reboot

When the Pi boots again, check your IP address on the network with,

$ ifconfig

You should see three blocks: one labelled ‘eth0’ for ethernet, ‘lo’ for loopback and ‘wlan0’ for the wifi connection. The wlan0 block should have an IP address such as 192.168.1.11 in the line,

inet addr: 192.168.1.11

This is the Pi’s ip address on your local network. Take note of it.

Go to another computer on your network and open a terminal or command line interface (CLI). Type,

$ ssh pi@192.168.1.11

You will be asked for a password, and hopefully you will be connected to the Pi’s command line, no longer needing the monitor. This is called a headless connection.

Raspberry Pi 3 SD card image setup (part 1)

Download the latest Raspbian

Raspbian Jessie with Pixel

Unzip the .zip file

$ unzip 2016-11-25-raspbian-jessie.zip

This will create a new image file,

2016-11-25-raspbian-jessie.img

Look at the mounted filesystems before you put the microSD card, in its adapter, into your computer,

$ df -h

Place the microSD card in the SD adapter and slot it into your computer. Check the name for this on your system,

$ df -h

Here we see the new partitions on the SD card, which were not present the first time we ran ‘df -h’. These are ‘mmcblk0p3’, ‘mmcblk0p5’ and ‘mmcblk0p6’. The card itself is ‘mmcblk0’. If this is a new card you won’t see these with ‘df -h’. In that case change directory to /dev,

$ cd /dev
$ ls

and you will see the mmcblk0** partitions.

If they do not show up with ‘df -h’ then they are not mounted. You do not want them mounted when you copy the image to the SD card, so it is important to unmount these partitions with umount.
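For example, assuming the partition names seen above,

$ sudo umount /dev/mmcblk0p3
$ sudo umount /dev/mmcblk0p5
$ sudo umount /dev/mmcblk0p6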

Now we are ready to put the .img file on the microSD card at ‘mmcblk0’. ‘mmcblk0’ with no partition after it references the whole card, which is what we want.

We will use ‘dd’ to copy the image onto the SD card.
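A typical invocation, assuming the card really is /dev/mmcblk0 and the image is in the current directory (double-check the ‘of=’ device, as dd will happily overwrite the wrong disk),

$ sudo dd if=2016-11-25-raspbian-jessie.img of=/dev/mmcblk0 bs=4M
$ sync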

Then remove the card and slot it into your Pi 3.

I don’t know why, but I had to repeat this process. The card did work in the Pi 3, but neither the wifi nor the ethernet connection would work.

Setting up gh-pages

Log in to github.com and create a new repository. Copy the link for this repository. It will look like this,

https://github.com/<user-name>/<repo-name>.git

On your local machine create a new directory,

$ mkdir testGHpages

cd into it

$ cd testGHpages

git init this directory

$ git init

Add a simple index.html file to this directory.
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
</head>
<body>
<span>Hello world!</span>
</body>
</html>

Add all directory contents,
$ git add .

Commit directory contents,
$ git commit -m "initial commit"

Add the repository you made as a remote of this directory.
$ git remote add origin https://github.com/<username>/<repo-name>.git

Push the contents of this directory to the repository on github.
$ git push origin master
Your username and password will be required.

Create a gh-pages branch,
$ git checkout -b gh-pages

Push the contents to this branch,
$ git push origin gh-pages

You should be able to see your page at,
https://<user-name>.github.io/<repository-name>

It is not necessary, but you can merge your main branch into the gh-pages branch with,
$ git merge master

To go back to the main branch,
$ git checkout master

D3.js Basic json Graph

Basic example of graphing using D3.js
Useful links used for this post:
http://alignedleft.com/tutorials/d3/axes
https://square.github.io/intro-to-d3/

<!DOCTYPE html>
<html lang='en'>
  <head>
    <meta charset='utf-8'>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.1/jquery.min.js" type="text/javascript"></script> 
    <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.17/d3.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/crossfilter/1.3.12/crossfilter.min.js" type="text/javascript"></script>
    <script src="http://cdnjs.cloudflare.com/ajax/libs/dc/1.7.5/dc.min.js" type="text/javascript"></script>
    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" type="text/javascript"></script>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
    <link rel="stylesheet" href="css/tempGraph.css">
    <link href="http://cdnjs.cloudflare.com/ajax/libs/dc/1.7.5/dc.min.css" rel="stylesheet" type="text/css">
    <script src="https://d3js.org/d3-array.v1.min.js"></script>
    <script src="https://d3js.org/d3-collection.v1.min.js"></script>
    <script src="https://d3js.org/d3-color.v1.min.js"></script>
    <script src="https://d3js.org/d3-format.v1.min.js"></script>
    <script src="https://d3js.org/d3-interpolate.v1.min.js"></script>
    <script src="https://d3js.org/d3-time.v1.min.js"></script>
    <script src="https://d3js.org/d3-time-format.v2.min.js"></script>
    <script src="https://d3js.org/d3-scale.v1.min.js"></script>
    
  </head>
  
  <body>
    <div class='container' id='main-container'>
      <div class='content'>
	<div class='container' style='font: 10px sans-serif;'>
	  <h3></h3>
	  <div class='row-fluid'>
	    <div class='remaining-graphs span8'>
	      <div class='row-fluid'>
                <div class='pie-graph span8' id='dc-line-chart'>
 
		</div>
	      </div>
	    </div>
	  </div>
	</div>
      </div>
    </div>
    <script>
      var data = [
          {xVal : 1, yVal : 1},
          {xVal : 2, yVal : 4},
          {xVal : 3, yVal : 2},
          {xVal : 4, yVal : 3}
      ];
      var padding = 25;
      var w = 500;
      var h = 150;
      
      var xScale = d3.scaleLinear()
          .domain([0,d3.max(data, function(d) {return d.xVal})])
          .range([0, 200]);

      var yScale = d3.scaleLinear()
          .domain([0,d3.max(data, function(d) {return d.yVal})])
          .range([100, 0]);

      //var xAxis = d3.axisBottom(xScale) // d3 v4 API
      var xAxis = d3.svg.axis()           // d3 v3 API (from the d3 3.5.17 build loaded above)
          .ticks(4) // specify the number of ticks
          .scale(xScale)
          .orient("bottom");

      var yAxis = d3.svg.axis()
          .scale(yScale)
          .orient("left")
          .ticks(7);
      
      var svg = d3.select('#dc-line-chart')
          .append('svg')        // create an <svg> element
          .attr('width', w) // set its dimensions
          .attr('height', h);

        svg.append("g")
        .attr("class", "axis")
        .attr("transform", "translate(" + (padding) + "," + padding + ")")
        .call(yAxis);
	
	svg.append('g')            // create a <g> element
          .attr('class', 'axis')   // specify classes
	  .attr("transform", "translate(" + padding + "," + (h - padding) + ")")
          .call(xAxis);            // let the axis do its thing
	  
      var svg = d3.select('svg');
      svg.size();

      var rects = svg.selectAll('rect')
          .data(data);
          rects.size();
	
      var newRects = rects.enter();
      
      newRects.append('rect')
          .attr('x', function(d, i) {
              return xScale(d.xVal);
          })
          .attr('y', function(d, i) {
              return yScale(d.yVal);
	  })
	  .attr("transform","translate(" + (padding -5) + "," + (padding - 5) + ")")
          .attr('height', 10)
          .attr('width', 10);
      
    </script>
  </body>
</html>

tempGraph.css

.axis path,
.axis line {
    fill: none;
    stroke: black;
    shape-rendering: crispEdges;
}

.axis text {
    font-family: sans-serif;
    font-size: 11px;
}

Basics of MongoDB Shell

With MongoDB installed, (see Install Mongodb on Fedora 23)

get the server running with,

$ mongod

and remember this will keep running in the foreground, so open a new tab in your terminal to continue. In Fedora this is,

Ctrl+shift+T

In that new terminal,

$ sudo mongo

Let us see what databases, if any, exist on the server,

 > show dbs

Say I have a database called ‘mydb’. Now if we want to use that database we must use,

 > use mydb

To see what ‘collections’ or ‘tables’ exist in this database use,

 > show collections

If there is a collection called ‘movie’ and we’d like to look at its contents, use,

 > db.movie.find()

To create a new database just start using it,

 > use newDB
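The new database will not show up in ‘show dbs’ until it contains some data, so insert a document into a collection, e.g.,

 > db.movie.insert({ "title": "Example" })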

Django: Multiple Pagination in a single Template

Assuming you would like to display two models in the same template each with their own pagination, here’s how to do it.

views.py

from django.core.paginator import Paginator, PageNotAnInteger, EmptyPage

def myview(request):
        Model_one = Model.objects.all()
        paginator = Paginator(Model_one, 6)
        page = request.GET.get('page1')
        try:
            Model_one = paginator.page(page)
        except PageNotAnInteger:
            Model_one = paginator.page(1)
        except EmptyPage:
            Model_one = paginator.page(paginator.num_pages)

        Model_two = Model_other.objects.all()
        paginator = Paginator(Model_two, 6)
        page = request.GET.get('page2')
        try:
            Model_two = paginator.page(page)
        except PageNotAnInteger:
            Model_two = paginator.page(1)
        except EmptyPage:
            Model_two = paginator.page(paginator.num_pages)

        context = {'model_one': Model_one, 'model_two': Model_two}
        return render(request, 'template.html', context)

The important thing above is the ‘page1’ and ‘page2’.

In the template,

{% if model_one %}
          <div class="col-md-12 well">
            {% for item in model_one %}
            ..... iterates through model_one.....
            {% endfor %}
            <span class="step-links pagination">
                {% if model_one.has_previous %}
                    <a href="?page1={{ model_one.previous_page_number }}"> previous </a>
                {% endif %}
                <span class="current">
                    Page {{ model_one.number }} of {{ model_one.paginator.num_pages }}
                </span>
                {% if model_one.has_next %}
                    <a href="?page1={{ model_one.next_page_number }}"> next </a>
                {% endif %}
            </span>
          </div>
          {% endif %}
          {% if model_two %}
          <div class="col-md-12 well">
            {% for item in model_two %}
            ..... iterates through model_two.....
            {% endfor %}
            <span class="step-links pagination">
                {% if model_two.has_previous %}
                    <a href="?page2={{ model_two.previous_page_number }}"> previous </a>
                {% endif %}
                <span class="current">
                    Page {{ model_two.number }} of {{ model_two.paginator.num_pages }}
                </span>
                {% if model_two.has_next %}
                    <a href="?page2={{ model_two.next_page_number }}"> next </a>
                {% endif %}
            </span>
          </div>
          {% endif %}

Again using ‘page1’ and ‘page2’ to distinguish the pagination for each model.

Django-allauth Installation

There is a useful django-allauth video tutorial at Create third party Facebook login in Django. The document below simply follows that but only adds Facebook social authentication. Documentation can be found at the django-allauth documentation. django-allauth can also be installed by cloning from https://github.com/pennersr/django-allauth
I am running Django 1.9.5 and will install django-allauth 0.25.2. Set up a virtual environment if you haven’t already.

$ pip install django-allauth

Check the packages in your virtual environment with,

$ pip list

You can install the packages in requirements.txt to your virtual environment or whatever environment you are in with,

$ pip install -r requirements.txt

Put the packages in your virtual environment into your requirements.txt file with,

$ pip freeze > requirements.txt

For this example the Django project name is `crudProject' and the app name is `crudapp'. In settings.py add appropriately to TEMPLATES so that it looks like this,

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [os.path.join(BASE_DIR, "templates")],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
                'django.template.context_processors.request',
            ],
        },
    },
]

Bear in mind,

'django.template.context_processors.request',

may already be in the `context_processors’
Then add to settings.py,

AUTHENTICATION_BACKENDS = (
    # Needed to login by username in Django admin, regardless of `allauth`
    'django.contrib.auth.backends.ModelBackend',

    # `allauth` specific authentication methods, such as login by e-mail
    'allauth.account.auth_backends.AuthenticationBackend',
)

Then add to INSTALLED_APPS

INSTALLED_APPS = (
    ...
    # The Django sites framework is required
    'django.contrib.sites',
    'allauth',
    'allauth.account',
    'allauth.socialaccount',
    'allauth.socialaccount.providers.facebook',
)

and also add to settings.py,

SITE_ID = 1

To urlpatterns = […] in urls.py add,

url(r'^accounts/', include('allauth.urls')),

To see the account URLs available to you, go to localhost:8000/accounts/ and be sure not to omit the trailing ‘/’.
Next run a migrate,

$ python manage.py migrate
$ python manage.py runserver

And go to localhost:8000/admin. You should see in admin: ACCOUNTS, SITES and SOCIALACCOUNTS.

We need to go to the Facebook Developer Site. Sign up for an account or log in with your Facebook credentials if you already have a Facebook account.
In the `My Apps’ pull down menu select `Add a New App’

and then click on `Website`

Put in the name of your app which in my case is `crudapp`

 


Then continue to `Create New Facebook App ID`. You'll be asked for an email; after that, choose a category for your app, e.g. `Education'. Next click `Create App ID`.

 


Skip down to the bottom of the page,

 

and for `Site URL:’ Put in localhost:8000, then click `Next’.
Then choose `Login’

 


Then go to the `Apps’ pulldown menu at the top right of the page and choose `crudapp`. This should take you to the app’s dashboard and you should see the app ID and secret key which you can reveal my clicking on the `Show’ button.

 


Now go back to your browser and navigate to localhost:8000/admin and go into `SITES` and click on `example.com`

 


Change the `Domain name` and `Display name` to `localhost:8000`.

Now localhost:8000 has site ID 1. Back in the admin home click on `Social applications' under `SOCIALACCOUNTS'.

 

Next click `ADD SOCIAL APPLICATION` which is over on the top right of the screen. Fill in the `Provider`, `Name` and the `Client id` and `Secret key` which we looked at on the facebook developer site. Click the `localhost:8000` under `Available sites`.

 

Move `localhost:8000` across from `Available sites` to `Chosen sites` by clicking the little right arrow between them, while it is highlighted.

 

Then click `SAVE` at the bottom right of the page. Now log out of admin and navigate to http://localhost:8000/accounts/login
This should bring up a simple unstyled form,

 

You need to return to the Facebook developer site and under `App Review` set `Make crudapp public?` to `Yes`.

 

Log out of the Facebook developer site and all Facebook accounts and go back to http://localhost:8000/accounts/login/
This will bring you to http://localhost:8000/accounts/loggedin/#_=_ where you will see `Page not found (404)`. To correct this,
In settings.py add,

LOGIN_REDIRECT_URL = '/'

to redirect to the homepage after login.
Next add to your app a `templates` directory and inside that a directory called `account`. In the directory where you set up your virtual environment you will find a lib directory; from there go to,

lib/python2.7/site-packages/allauth/templates

where you should find a base.html template file. Copy this to your app's templates directory.

$ cp base.html ~/djangoForum/djangoForum/crudapp/templates/

Then change directory into `account` and you will see html template files for login.html, logout.html, signup.html etc. These files need to be copied to your app’s templates/account directory.

$ cp -r * ~/djangoForum/djangoForum/crudapp/templates/account/

You can now style these templates.

Access the Python shell

Access the Python shell with,

$ python manage.py shell

This will take you to a Python shell.
However,

$ python manage.py dbshell

will take you to a ‘sqlite’ shell if it is installed.
This can also be accessed outside of Django with,

$ sqlite3

So continuing now in the Python shell (not dbshell) connect with the models in your app using,

>>> from <appname>.models import <model1>, <model2>

Have a look at everything in <model1>,

>>> <model1>.objects.all()