Wednesday, October 23, 2019

The Dwarves of Northaxe

I recently ran a one-shot D&D adventure, in our DM's absence, using Crimson Dragon Slayer d20 rules. Posting here as a way to share the results with my group.

My dwarves (and I am taking liberties with DM Mike's world) generally have a few social attributes:
  • Full name: rarely used in normal speech.
  • Nickname: a short name, often used among friends and to save time (dwarves love efficiency).
  • Clan name: a name used only among clan members, or truly close friends. A non-clan member using your clan name without permission is considered grossly impolite at best, and an unforgivable insult at worst. Clan names often follow a pattern; for example, Mountainbelly clan names are always gemstones.
  • Beards: beards are not limited to male dwarves.

Players picked their own nickname and described their beards and personalities.

Cast of Characters:

Krobatryd Ironforged (Alex)
Major (Army, Sapper), Geologist
Fighter, 3
Primary: Constitution
Clan: Mountainbelly
Nickname: Batty
Clan Nickname: Garnet
Beard: Knee length, red. Peppered with small braids and carved gemstones.
Eldest of the dwarven command group. First saw action during the late days of the goblin wars. Lawful, committed to discipline and protecting her troops.

Garmotryd Merryshoulder (Matt)
Captain (Army, Cavalry), Botanist
Cleric, 2
Primary: Intelligence
Clan: Mountainbelly
Nickname: X
Clan Nickname: Tigereye
Beard: Black with vines woven in.
Born during the Goblin Wars. Clerical powers are drawn from the Mountain Heart, leaning toward Nature.

Norguibelyn Mountaincloak (Rab)
Lieutenant (Marine, Infantry), Historian
Fighter, 2
Primary: Dexterity
Clan: Stormspine
Nickname: Norg
Clan Nickname: Boots
Beard: Auburn/fall brown, red and blonde. Beaded with small human bones.
On loan to the Mountainbelly Marine detachment. Norguibelyn is Ulara's superior, by rank, and does not trust the duergar. Wrote the history of the adventure in reverse, attempting to drive an outcome?

Ulara Coalhewer (Jessica)
First Sergeant (Marine, Reconnaissance), Chemist
Thief, 2
Primary: Dexterity
Clan: Mountainbelly
Nickname: Spectre
Clan Nickname: Pearl
Beard: Lil' goatee
Ulara is a displaced duergar who was taken in by the Mountainbelly clan. The duergar are not trusted, in general, but Ulara has proven worthy, rising in the ranks of the Royal Marines by taking on the most dangerous missions offered. Ulara's loyalty to the Mountainbelly matriarch is unquestioned. The name "Pearl" is a cruel joke, as pearls are not considered gemstones by the dwarves.

Fotgruhilda Mountainbelly (NPC)
Cadet (Army, Infantry), Agriculture
Clan: Mountainbelly
Nickname: Hilda
Clan Nickname: Opal
Daughter of the Mountainbelly Matriarch, by way of a blood bond. A young human of fourteen summers. She is extremely pale, yet not sickly looking. When she mentions her age to other dwarves, they are generally confused, as fourteen years is very young for a dwarf.

The Adventure:

The dwarves were brought together by the Mountainbelly Matriarch. Northaxe is prospering. Their mining industry is strong, and the fleet of airships is raising the kingdom's prestige and wealth throughout the world. They are so prosperous that they are running out of room. Mountainbelly has decided to take upon herself the job of finding new areas to settle. She has assembled a team to investigate a disused cavern. What is down there? Is it worth the effort of tunneling to it? Can dwarves survive there?

It is Opal's coming of age time, and she is to accompany the party in this expedition. There were twenty retainers (low ranking enlisted, several corporals, and some sergeants) sent to carry supplies and perform needed work.

The passage is narrow, and winds badly, making travel slow and difficult. Eventually a large cavern is found. There is a spring here, and mushrooms of the lumber variety. 'Captain X' determined that other vegetation could be grown here as well. The spring created a large pond, which held fish. The party made camp here. There seemed to be some danger from poor ventilation, so fires were kept to a minimum. A guard was set from among the enlisted, and the night passed without incident.

Two passages left this cavern. 'Spectre' followed one, and found it to be safe, continuing deeper into the mountain. The party followed (leaving four privates behind). As the stone began to change, 'Batty' was able to detect traces of silver in the rock. They found a new cavern, which had some moths and bats, plus lichen. The air smelled strongly (to a dwarf) of iron. Making camp again, a guard was set. The party woke to find 'Norg's' armor ruined. Tell-tale signs of a rust monster were visible. The privates on watch were given extra carry duty.

Spectre followed the exit, down again. Tracks of the rust monster were present. A small hole was found, which seemed to be a rust monster nest. Spectre was able to use bat guano and other agents to create an explosive, which was dropped into the nest. A boulder was found and used to block the hole, and the party continued down. There was some archaic graffiti, which seemed to warn of the rust monster nest, but the wording was strange.

Finally, they came to a cave-in. Batty was able to determine that it was intentional. Along with Norg, they remembered that during the Goblin Wars a passage had been sealed off to prevent a back door into Northaxe. Could this be that passage?

The party decided to retrace their path to the first large cavern and try the other passage. They left a sergeant and some privates to begin building defenses. They camped in the iron-smelling room. Batty took the remaining ruined armor and laid it in the entrance, in case another rust monster came.

Later, everyone woke, a bit dazed. An odd powder covered their faces. The guards were asleep, powdered as well. Three enlisted were missing, as was 'Opal'. There were clear tracks leading back to the cave-in. Following quickly, they reached the cave-in and found the troops left behind dead; clearly ambushed. Inspection of the cave-in revealed a concealed, but difficult to traverse, passage. The commanders (PCs) followed.

On the other side all was quiet. There was graffiti near the cave-in with hash marks, as though someone was counting time. The counting seemed to go on a long time before ending... The graffiti grew angrier, cursing the dwarves above. Beyond, a roughly dwarven figure shuffled just outside of visual range. Spectre moved ahead, and saw a giant spider lurking. Holding an action, when the spider charged, Spectre struck it. Batty came up, slaying the spider with a critical hit.

The "dwarf" lurched forward, proving to be a skeleton. Batty shouted orders at the skeleton, which was in military garb. It became confused, and stopped. More skeletons appeared behind it, as well as a wight bearing the rank of colonel. The wight shouted, "This is your fault!" and fired an arrow at Batty. It then threw down the bow, lifted its sword, and advanced.

The fight went quickly. Captain X wounded a skeleton with holy power, but was then struck down. The skeletons did not go easily, until Batty slew the wight, which caused the remaining skeletons to collapse. Beyond the party's vision, they heard a great rattling of bones. As they approached they saw inert skeletons lying everywhere, as well as Opal and the three privates, tied up but alive.

Epilogue:

Messengers were sent back to the surface to describe what was found, and what had happened. Among the purged undead, an aged, rotten journal was found, attributed to Colonel Tudratin Longhelm. The journal discusses how the cavern had been sealed off to prevent goblins from using the obscure passage as a back door into Northaxe. He and a battalion of volunteers stayed as a rear guard. Few knew of the passage, nor of the mission to guard it. Time wore on, and the goblins never came. Provisions began to run out, as well as fuel for fires. The dwarves began to go mad, and Longhelm began to blame his leadership for deserting him and his soldiers. His sense of betrayal warped his reason. He ordered his soldiers to begin digging their way out, but it was too late. Starvation and exposure took their toll. Longhelm went early, but his insanity and rage led him to become one of the undead; a spirit of vengeance. He began turning his troops. He was still capable of some reason, and continued the journal for some time into his undeath. The digging continued, but the purpose was lost. The cavern was all they knew, so they stayed.

Further searching showed that the cavern holding Longhelm was massive. Another fissure was found, also caved in, which likely leads out of the mountain. The goal of the mission seems satisfied. The passage does lead to areas of use, and life could be sustained on this new frontier, but what do we say about what happened here?

Monday, August 19, 2019

Powershell 7 on Ubuntu, the Weird Way?

So, I work in Powershell for most everything, in a 99.999% Windows environment, but recently gained access to spin up development servers. Lo and behold, there were Linux servers available. I am a Debian/Ubuntu guy, so I was able to get a Run Level 3 Ubuntu server up in no time. My intention is to test Powershell 7 with my Windows DEV environment, but naturally, there were some hurdles.

First, I try to get Powershell 7, per the instructions from Microsoft.

# Download the Microsoft repository GPG keys
wget -q https://packages.microsoft.com/config/ubuntu/16.04/packages-microsoft-prod.deb

# Register the Microsoft repository GPG keys
sudo dpkg -i packages-microsoft-prod.deb

#dpkg: error processing archive packages-microsoft-prod.deb (--install):
# cannot access archive: No such file or directory
#Errors were encountered while processing:
# packages-microsoft-prod.deb

# Update the list of products
sudo apt-get update

# Install PowerShell
sudo apt-get install -y powershell

#Reading package lists... Done
#Building dependency tree
#Reading state information... Done
#E: Unable to locate package powershell

# Start PowerShell
pwsh

#No command 'pwsh' found, did you mean:
# Command 'posh' from package 'posh' (universe)
# Command 'push' from package 'heimdal-clients' (universe)
# Command 'pdsh' from package 'pdsh' (universe)
# Command 'ppsh' from package 'ppsh' (universe)

#pwsh: command not found

Clearly that did not work. The wget download silently failed; it turns out packages.microsoft.com is blocked from my new server, so there was no .deb file to install. Hmm.

wget -S --spider https://packages.microsoft.com/config/ubuntu/16.04/packages-microsoft-prod.deb
Spider mode enabled. Check if remote file exists.
--2019-08-19 14:40:34--  https://packages.microsoft.com/config/ubuntu/16.04/packages-microsoft-prod.deb
Resolving packages.microsoft.com (packages.microsoft.com)... 40.76.35.62
Connecting to packages.microsoft.com (packages.microsoft.com)|40.76.35.62|:443... failed: Connection refused.


But wait... I was able to download Powershell 7 for Windows, so why not just download the .deb file the same way and install it manually? One download and an sftp session later, and here I go:

#Use the actual package name
sudo dpkg -i powershell_6.2.0-1.ubuntu.16.04_amd64.deb
sudo apt-get install -f


Well, that was easy. Hey, how come...?

$ which powershell
$ which powershell-preview
$ which pwsh


I guess I need to find it? I am a bit lazy, so:

sudo find / -name pwsh
/opt/microsoft/powershell/7-preview/pwsh


So, add that to your PATH variable:

$ PATH=$PATH:/opt/microsoft/powershell/7-preview
$ echo $PATH
/home/vdev2/bin:/home/vdev2/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/opt/microsoft/powershell/7-preview
$ which pwsh
/opt/microsoft/powershell/7-preview/pwsh


So...

$ pwsh
PowerShell 7.0.0-preview.2
Copyright (c) Microsoft Corporation. All rights reserved.

https://aka.ms/powershell
Type 'help' to get help.

PS /home/dave>


That whole copyright/help banner is a bit of a pain, though. A quick info pwsh, however, shows that we can pass standard Powershell arguments to the Linux version.

$ pwsh -NoLogo
PS /home/vdev2>


So, we can change our .bashrc file to include that switch and never see the banner again, and add an alias so we can use powershell, if we want:

alias pwsh='pwsh -NoLogo'
alias powershell='pwsh -NoLogo'
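These lines can be appended to ~/.bashrc to make both the PATH change and the banner-free launch persistent across sessions (a sketch; the 7-preview path is the one find located above, so adjust it to wherever pwsh landed on your system):

```shell
# Append the PATH entry and the aliases to ~/.bashrc so they survive new shells.
# The single-quoted heredoc keeps $PATH literal for .bashrc to expand later.
cat >> ~/.bashrc <<'EOF'
export PATH="$PATH:/opt/microsoft/powershell/7-preview"
alias pwsh='pwsh -NoLogo'
alias powershell='pwsh -NoLogo'
EOF
```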

Then reload our profile, and we are done.


$ exec bash
$ pwsh
PS /home/vdev2>


Voilà!

Monday, August 12, 2019

Powershell 5, 6, 7 -- Where am I?

So, I'm testing out Powershell Core (6.2.2) and Powershell 7 (7.0.0-preview.2) on a Windows system, and it occurred to me that I might want to know which version I am in while working. This seemed simple enough.

For more information, see the RTPSUG intro to 6.2/7.0 here.

I had already noticed that 6.2 and 7.0 do not use my 5.1 $PROFILE, so why not have the profile change the shell prompt?

function Global:prompt { "PS 5 | $PWD>" }

Add that to my 5.1 profile, and I get:

PS 5 | A:\>

I went to create a profile for 6.2 (basic instructions here), and added:

function Global:prompt { "PS 6 | $PWD>" }

And got:

PS 6 | A:\>

Nice. I then went to 7.0 and I see:

PS 6 | A:\>

Well... It seems that the way I installed 6.2 and 7.0 (or the default manner of the shell) has the two versions sharing a $PROFILE. So, we need to do this the way I should have done from the beginning. I change my two profiles to have the following:

function Global:prompt{ "PS $($PSVersionTable.PSVersion.Major)>" }

$PSVersionTable.PSVersion is a System.Version variable. So now I get the correct major version programmatically.

Although, there is an even better way to handle this...

Here are the two profile locations:

PS 5 | C:\Users\david\bin>$profile
C:\Users\david\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1

PS 6 | C:\Users\david\bin>$PROFILE
C:\Users\david\Documents\PowerShell\Microsoft.PowerShell_profile.ps1


I have many functions I use, and variables defined in my 5.1 profile. Do I want to replicate everything, every time? No. So, instead, let's do this to my 6.2/7.0 profile:

$PROFILE = "C:\Users\david\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1"
. $PROFILE


This re-assigns my 6.2/7.0 $PROFILE value to match that of my 5.1 $PROFILE value. I then dot-source my profile, and my 5.1 profile is loaded into whichever version I use.

Friday, August 2, 2019

Raspberry Pi 4 and Luks Encrypted Disks

I recently got a Raspberry Pi 4, which is USB-C compatible. I have a 1TB external drive, which was encrypted on a Mint 19 system (which is sadly no longer mine). "Cool! I can use it with my new Pi!"

First, I am asked for my passphrase, then:

Error: pcmanfm bd_crypto_luks_open_blob called but not implemented

Hmm...

I installed Disk-Manager, thinking that was the issue, but nope. This is an encrypt/decrypt problem. I found my first clue here.
Below error appears in Lubuntu 18.04 LTS

bd_crypto_luks_open_blob called, but not implemented

when opening Luks Encrypted volume using PCManFM, the Lubuntu File Manager.

To overcome this error, open a terminal window and enter the commands below:

$ sudo apt install libblockdev-crypto2
$ sudo systemctl restart udisks2.service


Ok. Restart the disk, enter the passphrase, and...

Error: Luks Encrypted error storing passphrase in keyring (the name org.freedesktop.secrets was not provided by any service files)

So, then I found this semi-related post...
I can confirm that sudo apt install gnome-keyring solves the problem on KDE. Looks like the problem is with node-keytar.

Now I was able to decrypt the disk and access it. However, I do not have permissions for the contents. I don't want to make any real changes to the drive, so I decided to open in a GUI file manager with elevated privileges.

sudo pcmanfm

This is not ideal, but slightly more convenient to traverse the drive.

Sunday, July 21, 2019

Chromium Browser for Linux (Debian)

https://tutorials-raspberrypi.com/google-chrome-for-raspberry-pi/

sudo apt-get install chromium-browser --yes

Operates similarly to Chrome, but feels a bit faster, particularly on a Raspberry Pi.

Wednesday, April 3, 2019

Linux Powershell $PROFILE

One of my favorite things in Windows Powershell is the $PROFILE. I always refer to the How-To Geek article for creating it.

But what about $PROFILE in Linux Powershell?

PS /home/david> Test-Path $PROFILE
False
PS /home/david> New-Item -Path $PROFILE -Type File -Force

    Directory: /home/david/.config/powershell

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
------          3/29/19  10:26 AM              0 Microsoft.PowerShell_profile.ps1

PS /home/david> $PROFILE
/home/david/.config/powershell/Microsoft.PowerShell_profile.ps1


That was easy, but what do we do with this?

Well, for one thing, I notice that my terminal while in Powershell is named "Untitled".



That's an easy fix:

PS /home/david> Set-Content -Path $PROFILE -Value '$Host.UI.RawUI.WindowTitle = "Powershell"'
PS /home/david> Get-Content $PROFILE
$Host.UI.RawUI.WindowTitle = "Powershell"
PS /home/david> .$PROFILE


Now it looks like a shell should:


You could also add a function, allowing you to rename the terminal on demand. Add to your $PROFILE in an editor:

function Rename-Shell{
    param([string]$Name)
    $Host.UI.RawUI.WindowTitle = $Name
}


Then reload your $PROFILE and add a new title:

PS /home/david> .$PROFILE
PS /home/david> Rename-Shell -Name 'My Terminal'



Your $PROFILE loads every time you start Powershell and imports its contents. If you change the $PROFILE you can reload it by "dot sourcing" (the .$PROFILE mentioned above). You can set variables, create functions, assign aliases... $PROFILE is a Powershell script, so if you can script it, you can add it.

Tuesday, April 2, 2019

Linux Powershell Issues: History

So, my Mint system has Powershell. "Let's see how it works," I say.

david@mint ~ $ powershell
PowerShell 6.1.3

https://aka.ms/pscore6-docs
Type 'help' to get help.

PS /home/david> Get-Location
Error reading or writing history file '/home/david/.local/share/powershell/PSReadLine/ConsoleHost_history.txt': Access to the path '/home/david/.local/share/powershell/PSReadLine/ConsoleHost_history.txt' is denied.
Path
----
/home/david


Well, that's no good. It ran the Get-Location command, but I certainly don't want to see all that red every time I perform an action in the shell. Fortunately, the error shows us where our history file is.

david@mint ~ $ ls -l /home/david/.local/share/powershell/PSReadLine/ConsoleHost_history.txt
-rw-r--r-- 1 root root 336 Mar 28 11:04 /home/david/.local/share/powershell/PSReadLine/ConsoleHost_history.txt


Why does root own my history file? I installed Powershell as root, but that still seems odd. Well, there's no going back, so let's take over the file.

david@mint ~ $ sudo chown david /home/david/.local/share/powershell/PSReadLine/ConsoleHost_history.txt
[sudo] password for david:
david@mint ~ $ sudo chgrp david /home/david/.local/share/powershell/PSReadLine/ConsoleHost_history.txt
david@mint ~ $ ls -l /home/david/.local/share/powershell/PSReadLine/ConsoleHost_history.txt
-rw-r--r-- 1 david david 336 Mar 28 11:04 /home/david/.local/share/powershell/PSReadLine/ConsoleHost_history.txt
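As an aside, chown accepts a user:group pair, so the two commands above could have been one. A small sketch against a scratch file we already own (no sudo needed in that case):

```shell
# chown can set owner and group in a single call using the user:group form.
# Demonstrated on a temp file owned by the current user, so no root required.
f=$(mktemp)
chown "$(id -un):$(id -gn)" "$f"
stat -c '%U:%G' "$f"
```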


Now, back to the shell:

david@mint ~ $ powershell
PowerShell 6.1.3
Copyright (c) Microsoft Corporation. All rights reserved.

https://aka.ms/pscore6-docs
Type 'help' to get help.

PS /home/david> Get-Location

Path
----
/home/david

PS /home/david> Get-History

  Id CommandLine
  -- -----------
   1 Get-Location

Monday, April 1, 2019

Sha-Bang Your Scripts

Sometimes, Windows can be a friendly scripting environment. You write a batch file, and you run it. You write a Powershell script and, provided your Execution Policy allows it, you run it. If it is a batch file, you can even double-click it in the GUI. A Powershell script, however, you either need to run in its shell, or wrap in a batch script. The bash shell works a bit differently.

The Sha-Bang (Sharp Bang) declares to the shell which interpreter is needed to execute the script. A shell script (example.sh) would normally look like:

#!/bin/sh
echo "Hello, World!"


You would then add the executable flag:

chmod +x example.sh

The same can be done with other interpreters, such as python or perl:

#!/usr/bin/env python
print("Hello, World!")


Or:

#!/usr/bin/perl -w
print "Hello, World!\n"


The same can be done with Powershell on linux, provided your shell knows where Powershell lives.

$ which powershell
/snap/bin/powershell


If you get a response, then Powershell is in your env PATH, so you can have:

#!/usr/bin/env powershell
Write-Host "Hello, World!"


Powershell users in Windows will note that, just as when running a local script there, you need a dot-slash:

$ ./example.ps1
Hello, World!
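Putting the pieces together, a minimal end-to-end run of the shell-script case looks like this (same hello-world example as above):

```shell
# Write the script with its sha-bang, set the executable flag, run it dot-slash.
cat > example.sh <<'EOF'
#!/bin/sh
echo "Hello, World!"
EOF
chmod +x example.sh
./example.sh   # prints: Hello, World!
```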

Friday, March 29, 2019

Accessing JSON Data and Powershell Objects within Strings

Yesterday we discussed getting JSON data from a SqLite3 database.

Using that data in Powershell means turning it into an object. Referencing it often means working with strings.

First, let's get our data back, and then convert it to an object:

PS A:\> $MyData = Get-Content $TempCsv | ConvertFrom-Csv | ConvertTo-Json
PS A:\> $MyJson = $MyData | ConvertFrom-Json
PS A:\> $MyJson

name            twitter        url
----            -------        ---
David           dbsteimle      rhymeswithtimely.blogspot.com
Commander Candy codingComander codingcommanders.com

I want to turn this data into usable HTML. I could use the ConvertTo-HTML command:

PS A:\> $MyJson | ConvertTo-Html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>HTML TABLE</title>
</head><body>
<table>
<colgroup><col/><col/><col/></colgroup>
<tr><th>name</th><th>twitter</th><th>url</th></tr>
<tr><td>David</td><td>dbsteimle</td><td>rhymeswithtimely.blogspot.com</td></tr>
<tr><td>Commander Candy</td><td>codingComander</td><td>codingcommanders.com</td></tr>
</table>
</body></html>

Which results in:

nametwitterurl
Daviddbsteimlerhymeswithtimely.blogspot.com
Commander CandycodingComandercodingcommanders.com

Some of that data is for link generation, though. My users shouldn't have to cut and paste a URL from an HTML document. Instead, let's treat Powershell as if it were PHP, and loop through our object to create a proper table.

PS A:\> $OutPut = $null
PS A:\> $Output += "<table>`n<tr>`n<th>Name</th>`n<th>Twitter</th>`n<th>URL</th>`n</tr>`n"
PS A:\>
PS A:\> foreach($Record in $MyJson){
>> $OutPut += "<tr>`n"
>> $OutPut += "<th>$($Record.name)</th>`n"
>> $Output += "<td><a href='https://twitter.com/$($Record.twitter)'>@$($Record.twitter)</a></td>`n"
>> $Output += "<td><a href='http://$($Record.url)'>$($Record.url)</a></td>`n"
>> $OutPut += "</tr>`n"
>> }
PS A:\> $OutPut += "</table>"
PS A:\> $OutPut
<table>
<tr>
<th>Name</th>
<th>Twitter</th>
<th>URL</th>
</tr>
<tr>
<th>David</th>
<td><a href='https://twitter.com/dbsteimle'>@dbsteimle</a></td>
<td><a href='http://rhymeswithtimely.blogspot.com'>rhymeswithtimely.blogspot.com</a></td>
</tr>
<tr>
<th>Commander Candy</th>
<td><a href='https://twitter.com/codingComander'>@codingComander</a></td>
<td><a href='http://codingcommanders.com'>codingcommanders.com</a></td>
</tr>
</table>

Since $Record has properties we want to reference within a string, we need to isolate each expression, as in $($Record.name), to allow the .name to parse properly.

Now, our resultant table has both twitter and webpage links, making the HTML more useful for our readers.

Name Twitter URL
David @dbsteimle rhymeswithtimely.blogspot.com
Commander Candy @codingComander codingcommanders.com

Thursday, March 28, 2019

Sqlite3 Export to JSON

I create custom scripted Detection Methods for SCCM utilizing JSON. Powershell handles JSON nicely, particularly if converting from an object. Powershell will do the necessary escaping of characters, which can be missed when manually creating JSON, or when you know there will be many characters to escape. For example, here is some data which would need escaping, and its resultant JSON:

PS A:\> $Example = @"
>> localpath,registry,share,text
>> C:\Temp,HKLM:\SOFTWARE,\\thatshare\me,use a`ttab
>> "@
PS A:\> $Example
localpath,registry,share,text
C:\Temp,HKLM:\SOFTWARE,\\thatshare\me,use a     tab
PS A:\> $Example | ConvertFrom-Csv | ConvertTo-Json
{
    "localpath":  "C:\\Temp",
    "registry":  "HKLM:\\SOFTWARE",
    "share":  "\\\\thatshare\\me",
    "text":  "use a\ttab"
}

My Detection Methods will often look for file checksums, which means a path to the file and the expected checksum. To avoid mistakes, and to make it easier on me, I create a SqLite3 database with the items for my JSON. Exporting from SqLite3 can be sent to a CSV file, and then parsed into JSON. A simple database example:

PS A:\> sqlite3 .\example.db
SQLite version 3.24.0 2018-06-04 19:24:41
Enter ".help" for usage hints.
sqlite> .mode line
sqlite> SELECT * FROM example;
   name = David
twitter = dbsteimle
    url = rhymeswithtimely.blogspot.com

   name = Commander Candy
twitter = codingComander
    url = codingcommanders.com
sqlite> .quit

Now, a trick that is new to me is utilizing temporary files. I am using the .NET method, which is usable in Linux Powershell as well (your mileage may vary). You can create a temp file with [System.IO.Path]::GetTempFileName(). To use that file, you want to assign it to a variable.

PS A:\> $TempCsv = [System.IO.Path]::GetTempFileName()
PS A:\> $TempCsv
C:\Users\david\AppData\Local\Temp\tmp47AD.tmp

Next, because SqLite3 does not use the \ character in its paths, we need to change them to /.

PS A:\> $TempCsv = $TempCsv.Replace("\","/")
PS A:\> $TempCsv
C:/Users/david/AppData/Local/Temp/tmp47AD.tmp
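For comparison, bash can do the same backslash-to-slash substitution with parameter expansion; this is just an aside, not part of the Powershell walkthrough:

```shell
# Bash analogue of PowerShell's .Replace("\","/"): ${var//pattern/replacement}
# Single quotes keep the backslashes literal in the assignment.
TempCsv='C:\Users\david\AppData\Local\Temp\tmp47AD.tmp'
echo "${TempCsv//\\//}"   # prints: C:/Users/david/AppData/Local/Temp/tmp47AD.tmp
```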

Now we can create our SQL commands. This could also be a file, but I will use a here-string instead.

PS A:\> $TempSql = @"
>> .headers on
>> .mode csv
>> .output $TempCsv
>> SELECT name,
>>        twitter,
>>        url
>> FROM example;
>> .quit
>> "@
PS A:\> $TempSql
.headers on
.mode csv
.output C:/Users/david/AppData/Local/Temp/tmp47AD.tmp
SELECT name,
       twitter,
       url
FROM example;
.quit

Notice our converted $TempCsv value is in the $TempSql here-string.

Now, pipe the $TempSql into SqLite3:

PS A:\> $TempSql | sqlite3 .\example.db

Our $TempCsv file will now have the output from SqLite3 in CSV format.

PS A:\> gc $TempCsv
name,twitter,url
David,dbsteimle,rhymeswithtimely.blogspot.com
"Commander Candy",codingComander,codingcommanders.com

We can now use this CSV formatted data, but must convert it to a Powershell Object.

PS A:\> Get-Content $TempCsv | ConvertFrom-Csv | ConvertTo-Json
[
    {
        "name":  "David",
        "twitter":  "dbsteimle",
        "url":  "rhymeswithtimely.blogspot.com"
    },
    {
        "name":  "Commander Candy",
        "twitter":  "codingComander",
        "url":  "codingcommanders.com"
    }
]

Wednesday, March 27, 2019

Powershell on Linux

Process from https://websiteforstudents.com/install-microsoft-powershell-core-for-linux-on-ubuntu-16-04-18-04/

The only process that worked for me, however, was the snapd approach:

sudo apt update
sudo apt install snapd
sudo snap install powershell --classic


One issue, though, is that I must sudo powershell to launch.

EDIT: to use powershell as a normal user, with the snap install, you need to add /snap/bin to your path

david@mybox ~ $ sudo which pwsh
/snap/bin/pwsh
david@mybox ~ $ PATH=$PATH:/snap/bin



Distributor ID: LinuxMint
Description: Linux Mint 18.3 Sylvia
Release: 18.3
Codename: sylvia

Name                           Value
----                           -----
PSVersion                      6.1.3
PSEdition                      Core
GitCommitId                    6.1.3
OS                             Linux 4.15.0-46-generic #49~16.04.1-Ubuntu SMP Tue Feb 12 17:45:24 UTC 2019
Platform                       Unix
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0...}
PSRemotingProtocolVersion      2.3
SerializationVersion           1.1.0.1
WSManStackVersion              3.0

Tuesday, March 26, 2019

Using AVCONV to Extract Frames

I seem to always refer back to this page when I want to extract images from a video, either for animation or thumbnails:

https://dototot.com/how-to-extract-images-from-a-video-with-avconv-on-linux/


Tuesday, March 19, 2019

Security by Dior


I have been reading Dior by Dior, which is a great book for project management (particularly the discussion of creating the season's line). His discussion of security and piracy was very interesting, and smacks of IT security.

Note that the term 'model', when used by Dior here, indicates a dress/suit, rather than a person wearing it.
There are five classical methods by which dresses are copied, of which the most distasteful is naturally that which originates with the treachery of a member of the staff.
This is, of course the classic insider-threat; disgruntled employees, employees who have been compromised in some way. They are providing information to either competitors or counterfeiters.
The second method of copying takes place at a much later stage when the collection is shown to the Press. In most cases, what takes place is really a regrettable abuse of the details of the collection which are released, rather than proper copying.
Dior describes how the press are sworn to non-disclosure at the season launch. Naturally, the line must be shown to the press to stir interest. He has agents on the floor watching for sketch artists. "Write all you like, but don't draw!" is the rule.
The third method of copying, and probably the most common, is the work of our clients, who thus show little regard for the rules of the profession. They sketch secretly some of the models which have been shown to them, pretending that they are trying to memorize the general look of a dress, whose name they did not catch. When they are discovered, we either make them buy the disputed model, or confiscate their sketches and keep their caution money.
The 'caution money' is further described as a trust against clients (in this case retailers) who view the line, but make no purchases.
[The fourth method] The 'model renters' had their heyday in the years just after the war, and it was not until 1948 that their ringleader, a particularly cunning woman, was detected. She had bought through several intermediaries – generally private clients – the best models from the great Parisian couture houses. On her return to New York, she organized a miniature show of her own.
The model renters would rent garments from couture houses, often several of them, for the purposes of selling to retailers and manufacturers under false pretenses, copying the model to provide a lesser-quality version.
[The fifth method] The ‘model renter’ was succeeded by the publisher of albums of sketches. The prosperity of this form of fraud – the most widespread which has ever been practised – is partly explained by the difference in legislation between France and America – the laws of the latter country being much looser as regards the artistic reproduction of commercial property.
This one is either the most amazing, or least understood by Dior (and he certainly knew his business). As with the second method, these people are monitored to prevent them from making sketches (and he discusses how other clients act as enforcers, for they do not wish to buy a model which will potentially be pirated). They are either memorizing models with amazing accuracy, or in some cases using technology. Pirates had been caught with miniature cameras.

Counter-piracy measures were interesting, such as the use of invisible ink with serial numbers and secret stitching markers to aid in discovering pirates. At some of the fake shows mentioned in the fourth and fifth methods, agents of couture houses would attend in the guise of pirates, purchase/examine the item offered for copy, and then trace the sale to the original buyer.

There are numerous other instances where the book provides good information for either working with creatives, or working under a manager as a creative. From an IT perspective, Dior is a developer who does not develop, but has a team of trusted agents who are experts at what they do. His relationship with them is important to his success. Well worth reading.

Thursday, February 28, 2019

Using Powershell to Get Objects from Sqlite3

As discussed in SQLite3 and Powershell Redirection, we do not need to add any special Powershell tools to access sqlite3 if we utilize piping.

Versioning on my system:
SQLite version 3.24.0 2018-06-04 19:24:41
PSVersion 5.1.14393.2791
Note: I create an Alias for sqlite3:

PS A:\sqlite_to_csv> Set-Alias -Name sqlite3 -Value A:\sqlite3\sqlite3.exe

Creating an Object in Powershell is generally the best way to work with data, and in my case I need an easy way to generate JSON files, which Powershell handles well.

First, let's make a sample database:

PS A:\sqlite_to_csv> sqlite3 sample.db
SQLite version 3.24.0 2018-06-04 19:24:41
Enter ".help" for usage hints.
sqlite> CREATE TABLE sample (
   ...> id INTEGER PRIMARY KEY AUTOINCREMENT,
   ...> name TEXT,
   ...> twitter TEXT,
   ...> blog TEXT
   ...> );
sqlite> INSERT INTO sample
   ...> (name,twitter,blog)
   ...> VALUES
   ...> ('David Steimle','dbsteimle','rhymeswithtimely.blogspot.com');
sqlite> SELECT * FROM sample;
1|David Steimle|dbsteimle|rhymeswithtimely.blogspot.com

Now we have something to query.

First, we want to create the query setup, which tells sqlite3 to emit CSV output with a header row. For this I use a Here-String.

PS A:\sqlite_to_csv> $QuerySetup = @"
>> .mode csv
>> .headers on
>>
>> "@

Then we can define our query; we will do a simple SELECT only, for now, and combine it with our query setup.

PS A:\sqlite_to_csv> $Query = "SELECT * FROM sample;"
PS A:\sqlite_to_csv> $MySql = "$QuerySetup$Query"
PS A:\sqlite_to_csv> Write-Host $MySql
.mode csv
.headers on
SELECT * FROM sample;

We then assign our result to an Object with pipes:

PS A:\sqlite_to_csv> $MyObject = $MySql | sqlite3 .\sample.db | ConvertFrom-Csv
PS A:\sqlite_to_csv> $MyObject | Format-List

id      : 1
name    : David Steimle
twitter : dbsteimle
blog    : rhymeswithtimely.blogspot.com

This Object is now usable in all the normal ways, such as turning it into JSON (I will output to a file, as that would be my end goal):

PS A:\sqlite_to_csv> $MyObject | ConvertTo-Json | Set-Content sample.json
PS A:\sqlite_to_csv> Get-Content .\sample.json
{
    "id":  "1",
    "name":  "David Steimle",
    "twitter":  "dbsteimle",
    "blog":  "rhymeswithtimely.blogspot.com"
}
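The round trip above (table rows, to CSV with headers, to objects, to JSON) is not specific to Powershell. For comparison, here is the same flow sketched in Python's standard library, using an in-memory database rather than sample.db; the dictionaries produced by `sqlite3.Row` play the role of the objects `ConvertFrom-Csv` gives us:

```python
import sqlite3
import json

# Build the same sample table, in memory
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE sample (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT,
    twitter TEXT,
    blog TEXT)""")
con.execute("INSERT INTO sample (name,twitter,blog) VALUES (?,?,?)",
            ("David Steimle", "dbsteimle", "rhymeswithtimely.blogspot.com"))

# Read rows back as dictionaries (the analog of ConvertFrom-Csv output)
con.row_factory = sqlite3.Row
rows = [dict(r) for r in con.execute("SELECT * FROM sample")]

# Serialize to JSON (the analog of ConvertTo-Json | Set-Content)
print(json.dumps(rows[0], indent=4))
```

One difference worth noting: going through CSV, every value arrives as a string ("id": "1"), whereas a direct database driver preserves the INTEGER type.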

While in a basic sense this works for me, I do need the ability to have sub-objects and arrays in my JSON. This means additional tables. The obvious place to start in this example would be with Social Media. First, we will make a table of social media sites:

sqlite> CREATE TABLE socialmedium (
   ...> id INTEGER PRIMARY KEY AUTOINCREMENT,
   ...> name TEXT,
   ...> url TEXT
   ...> );
sqlite> INSERT INTO socialmedium (name,url) VALUES
   ...> ('Twitter','https://twitter.com/');
sqlite> INSERT INTO socialmedium (name,url) VALUES
   ...> ('Instagram','https://instagram.com/');
sqlite> SELECT * FROM socialmedium;
id          name        url
----------  ----------  --------------------
1           Twitter     https://twitter.com/
2           Instagram   https://instagram.co

Then a lookup table with user details:

sqlite> CREATE TABLE socialuser (
   ...> id INTEGER PRIMARY KEY AUTOINCREMENT,
   ...> user INTEGER,
   ...> site INTEGER,
   ...> username TEXT
   ...> );
sqlite> INSERT INTO socialuser (user,site,username) VALUES
   ...> (1,1,'dbsteimle');
sqlite> SELECT * FROM socialuser;
id          user        site        username
----------  ----------  ----------  ----------
1           1           1           dbsteimle


These tables use the PRIMARY KEY from the sample table as a reference point. To work with our current Object, we need to add a new Property, and assign it as an empty hash table:

PS A:\sqlite_to_csv> $MyObject | Add-Member -NotePropertyName SocialMedia -NotePropertyValue @{}
PS A:\sqlite_to_csv> $MyObject

id          : 1
name        : David Steimle
twitter     : dbsteimle
blog        : rhymeswithtimely.blogspot.com
SocialMedia : {}

We need an INNER JOIN in this scenario, since the rows in socialuser reference sites by id in socialmedium:

PS A:\sqlite_to_csv> $Query = @"
>> SELECT user,name,url,username
>> FROM socialuser
>> INNER JOIN socialmedium ON socialmedium.id=socialuser.site
>> WHERE user=$($MyObject.id);
>> "@
PS A:\sqlite_to_csv> $MySql = "$QuerySetup$Query"
PS A:\sqlite_to_csv> Write-Host $MySql
.mode csv
.headers on
SELECT user,name,url,username
FROM socialuser
INNER JOIN socialmedium ON socialmedium.id=socialuser.site
WHERE user=1;

Now, pipe through sqlite3:

PS A:\sqlite_to_csv> $MyObject.SocialMedia = $MySql | sqlite3 .\sample.db | ConvertFrom-Csv
PS A:\sqlite_to_csv> $MyObject.SocialMedia

user name    url                  username
---- ----    ---                  --------
1    Twitter https://twitter.com/ dbsteimle

And Conversion to JSON:

PS A:\sqlite_to_csv> $MyObject | ConvertTo-Json | Set-Content sample.json
PS A:\sqlite_to_csv> Get-Content .\sample.json
{
    "id":  "1",
    "name":  "David Steimle",
    "twitter":  "dbsteimle",
    "blog":  "rhymeswithtimely.blogspot.com",
    "SocialMedia":  {
                        "user":  "1",
                        "name":  "Twitter",
                        "url":  "https://twitter.com/",
                        "username":  "dbsteimle"
                    }
}

To best utilize this, we would want a function where we could pass a known value as a parameter. We may not know that id 1 matches name 'David Steimle', but if we know the database, we might know to look for name='David Steimle'.

function Get-MyObject {
    param( [string]$Name )

    $QuerySetup = @"
.mode csv
.headers on

"@

    $Query = "SELECT * FROM sample WHERE name='$Name';"
    $MySql = "$QuerySetup$Query"
    $Temp = $MySql | sqlite3 .\sample.db | ConvertFrom-Csv

    $Temp | Add-Member -NotePropertyName SocialMedia -NotePropertyValue @{}

    $Query = @"
SELECT user,name,url,username
FROM socialuser
INNER JOIN socialmedium ON socialmedium.id=socialuser.site
WHERE user=$($Temp.id);
"@
    $MySql = "$QuerySetup$Query"
    $Temp.SocialMedia = $MySql | sqlite3 .\sample.db | ConvertFrom-Csv

    $Temp | ConvertTo-Json | Set-Content ".\$Name.json"

    return $Temp
}

PS A:\sqlite_to_csv> Get-MyObject -Name "David Steimle"

id          : 1
name        : David Steimle
twitter     : dbsteimle
blog        : rhymeswithtimely.blogspot.com
SocialMedia : @{user=1; name=Twitter; url=https://twitter.com/; username=dbsteimle}

PS A:\sqlite_to_csv> Get-Content '.\David Steimle.json'
{
    "id":  "1",
    "name":  "David Steimle",
    "twitter":  "dbsteimle",
    "blog":  "rhymeswithtimely.blogspot.com",
    "SocialMedia":  {
                        "user":  "1",
                        "name":  "Twitter",
                        "url":  "https://twitter.com/",
                        "username":  "dbsteimle"
                    }
}
Or, assign the result to a variable:

PS A:\sqlite_to_csv> $DavidSteimle = Get-MyObject -Name 'David Steimle'
PS A:\sqlite_to_csv> $DavidSteimle

id          : 1
name        : David Steimle
twitter     : dbsteimle
blog        : rhymeswithtimely.blogspot.com
SocialMedia : @{user=1; name=Twitter; url=https://twitter.com/; username=dbsteimle}
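The function pattern (look up the main record by a known value, then attach the joined rows as a sub-object) translates directly to other languages. Here is a sketch of the same shape in Python's stdlib, with the fixture tables inlined so it is self-contained; I return the social media rows as a list, since a user could have accounts on several sites:

```python
import sqlite3
import json

def get_my_object(con, name):
    """Look up a person by name and attach their social media as a sub-object."""
    con.row_factory = sqlite3.Row
    person = dict(con.execute(
        "SELECT * FROM sample WHERE name = ?", (name,)).fetchone())
    person["SocialMedia"] = [dict(r) for r in con.execute(
        """SELECT user, name, url, username
           FROM socialuser
           INNER JOIN socialmedium ON socialmedium.id = socialuser.site
           WHERE user = ?""", (person["id"],))]
    return person

# Minimal fixture mirroring the tables built earlier in the post
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sample (id INTEGER PRIMARY KEY, name TEXT, twitter TEXT, blog TEXT);
CREATE TABLE socialmedium (id INTEGER PRIMARY KEY, name TEXT, url TEXT);
CREATE TABLE socialuser (id INTEGER PRIMARY KEY, user INTEGER, site INTEGER, username TEXT);
INSERT INTO sample VALUES (1,'David Steimle','dbsteimle','rhymeswithtimely.blogspot.com');
INSERT INTO socialmedium VALUES (1,'Twitter','https://twitter.com/');
INSERT INTO socialuser VALUES (1,1,1,'dbsteimle');
""")
print(json.dumps(get_my_object(con, "David Steimle"), indent=4))
```

Note the `?` placeholders: unlike the interpolated "$Name" in the Powershell version, they are safe for names containing apostrophes.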

Friday, February 22, 2019

SQLite3 Output to CSV

Info from: http://www.sqlitetutorial.net/sqlite-tutorial/sqlite-export-csv/

Run the following in sqlite3, or save it as myexport.sql:

.headers on
.mode csv
.output myexport.csv
SELECT * FROM mytable;
.quit

If using myexport.sql, pipe it through Powershell:

gc .\myexport.sql | sqlite3 mydatabase.db

If you want to look at the CSV as an object, execute:

$MyObject = gc myexport.csv | ConvertFrom-Csv

More about Powershell and SQLite3 integration here.

Wednesday, January 23, 2019

Powershell and Creating Event Logs

While logging to a text file is a good way to capture script activity, creating an event log can be more useful. Using Powershell, you can have a script utilize (or create) an event log with a custom source, and then write events to it. I found this article helpful: How To Use PowerShell To Write to the Event Log.

First, you want to determine what Log Name and custom Source you want to use/create. Since my scripts typically affect applications, I use the Application log, but System might be good in some instances. For this example I will use "SteimleEvents" as my new custom source.

New-EventLog -LogName Application -Source SteimleEvents

We could then verify the Source is working with the log by running:

Get-EventLog -Logname Application -Source SteimleEvents -ErrorAction SilentlyContinue

This is fine the first time, but what if I have a new script which will utilize this Source? I would build logic into the script to check for the Source, and if it does not exist, create it. This can be tricky, because an empty source and a non-existent source give the same error, as in the logic below.

EDIT: Note that my written logic does not match my scripting logic, and the function New-EventLogEntry is required; it is provided further down. The entire script flow is included at the end of this post.

$Source = 'SteimleEvents'
if(Get-EventLog -Logname Application -Source $Source -ErrorAction SilentlyContinue){
    # this indicates that the log, and a 
    # log entry were found for the if
    # condition
    New-EventLogEntry -Information -Message "There is an existing event log for $Source"
} else {
    # the if was false, so we try to 
    # create the log/source, and pass the
    # error to a variable
    New-EventLog -LogName Application -Source $Source -ErrorAction SilentlyContinue -ErrorVariable TestSource
    if($TestSource -match 'already registered'){
        # if a match is found, then the log
        # exists, so we log that
        New-EventLogEntry -Information -Message "There is an existing event log for $Source"
    } else {
        New-EventLogEntry -Information -Message "Initializing event log for $Source"
    }

}

Now that we have our log, we can start utilizing it. I have created two functions and a preliminary hashtable of parameters:

# Define basic event parameters
$EventParameters = @{
    'LogName' = 'Application'
    'Source' = $Source
}
# Function to clear added parameters
function Initialize-EventParameters{
    $script:EventParameters = @{
        'LogName' = 'Application'
        'Source' = $script:Source
    }
}
# Function to create an eventlog entry
function New-EventLogEntry{
    param(
        [switch]$Error,
        [switch]$Warning,
        [switch]$Information,
        [string]$Message
    )
    if($Error){
        $EventID = 1113
        $EntryType = 'Error'
    } elseif($Warning){
        $EventID = 1112
        $EntryType = 'Warning'
    } else {
        $EventID = 1111
        $EntryType = 'Information'
    }
    Initialize-EventParameters
    $script:EventParameters += @{
        'EventId' = $EventID
        'EntryType' = $EntryType
        'Message' = $Message
    }
    Write-EventLog @EventParameters
}

The hashtable $EventParameters is created as an initialization in the script-level scope.

The function Initialize-EventParameters is called to reset $EventParameters to its initialized values.

Finally, New-EventLogEntry adds an event log entry. The function accepts four parameters:

  • Error
  • Warning
  • Information
  • Message
Including switches for 'EntryType' lets the function make decisions based on priority. I am not great with parameters, so if you call -Error and -Information the decision tree will make your entry an Error. The -Message parameter contains what you want the log entry to say. So use of the function would look like:

New-EventLogEntry -Error -Message "Oh no! Something went wrong!"
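The two ideas at work here (reset a shared parameter table, add the entry-specific keys, then splat it into the call; and let Error beat Warning beat Information) can be sketched in Python for comparison; the names below are mine, not part of any Windows API, and the function just builds the parameter dictionary the way the Powershell version builds $EventParameters:

```python
import logging

# Base parameters, copied fresh per entry (the analog of Initialize-EventParameters)
BASE_PARAMS = {"log_name": "Application", "source": "SteimleEvents"}

# Priority order mirrors the if/elseif chain: Error beats Warning beats Information
LEVELS = [("error", 1113, logging.ERROR),
          ("warning", 1112, logging.WARNING),
          ("information", 1111, logging.INFO)]

def new_log_entry(message, **flags):
    params = dict(BASE_PARAMS)           # fresh copy, so keys never accumulate
    for flag, event_id, level in LEVELS:
        if flags.get(flag):
            break                        # first matching flag wins
    else:
        event_id, level = 1111, logging.INFO   # default: Information
    params.update(event_id=event_id, level=level, message=message)
    return params

# Error and Information both set: Error wins, as in the Powershell decision tree
entry = new_log_entry("Oh no! Something went wrong!", error=True, information=True)
```

Calling the real logger with `some_logger(**params)` would then be the analog of splatting with @EventParameters.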

Or, you could call -ErrorVariable on every cmdlet, and if it has a length, log it. Note that not all cmdlets return errors; Test-Path does not, but Test-Connection does.

Get-Content C:\Temp\NotARealFile.txt -ErrorVariable result
if($result.Length -gt 0){
    New-EventLogEntry -Error -Message "C:\Temp\NotARealFile.txt not found"
}
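The same capture-the-failure-and-log-it pattern looks like this in a language with exceptions; a minimal Python sketch (the path and logger name are just illustrative), where the except block plays the role of the -ErrorVariable length check:

```python
import logging

log = logging.getLogger("SteimleEvents")

def read_file(path):
    """Try to read a file; on failure, log the error instead of raising."""
    try:
        with open(path) as fh:
            return fh.read()
    except OSError as err:
        # The analog of: if($result.Length -gt 0){ New-EventLogEntry -Error ... }
        log.error("%s not found: %s", path, err)
        return None

result = read_file("C:/Temp/NotARealFile.txt")
```

Either way, the point is the same: the failure is recorded somewhere durable rather than scrolling past on the console.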

Entire Script Section


# Define Source
$Source = "SteimleEvents"
# Define basic event parameters
$EventParameters = @{
    'LogName' = 'Application'
    'Source' = $Source
}
# Function to clear added parameters
function Initialize-EventParameters{
    $script:EventParameters = @{
        'LogName' = 'Application'
        'Source' = $script:Source
    }
}
# Function to create an eventlog entry
function New-EventLogEntry{
    param(
        [switch]$Error,
        [switch]$Warning,
        [switch]$Information,
        [string]$Message
    )
    if($Error){
        $EventID = 1113
        $EntryType = 'Error'
    } elseif($Warning){
        $EventID = 1112
        $EntryType = 'Warning'
    } else {
        $EventID = 1111
        $EntryType = 'Information'
    }
    Initialize-EventParameters
    $script:EventParameters += @{
        'EventId' = $EventID
        'EntryType' = $EntryType
        'Message' = $Message
    }
    Write-EventLog @EventParameters
}
# test for existing event log for this application; if it does not exist, create it
if(Get-EventLog -Logname Application -Source $Source -ErrorAction SilentlyContinue){
    New-EventLogEntry -Information -Message "There is an existing event log for $Source"
} else {
    New-EventLog -LogName Application -Source $Source -ErrorAction SilentlyContinue -ErrorVariable TestSource
    if($TestSource -match 'already registered'){
        New-EventLogEntry -Information -Message "There is an existing event log for $Source"
    } else {
        New-EventLogEntry -Information -Message "Initializing event log for $Source"
    }
}


Tuesday, January 22, 2019

SQLite3 and Powershell Redirection

I have a terrible memory, and always forget how this goes.

(Full disclosure, I am using Powershell 5.1.14393.2636/Desktop, and SQLite version 3.24.0)

My primary database at work is SQLite3, and my shell is Powershell. Sometimes I need to grab a set of information from a number of systems, or over a period of time, and record it for analysis. I could use a PS-Object for this, but what if my system crashes or is rebooted by IT? If I use a database I can at least get all data up until the crash/reboot occurs. So, without adding any fancy connectors from GitHub, let's just pipe.

All of these operations could be performed with a pipe, but I created a sample database in sqlite3. Note that I have created an alias to the executable.

PS A:\> sqlite3 .\sample.db
SQLite version 3.24.0 2018-06-04 19:24:41
Enter ".help" for usage hints.
sqlite> .mode column
sqlite> .headers on
sqlite> CREATE TABLE sample (data TEXT);
sqlite> INSERT INTO sample (data) VALUES ('this');
sqlite> SELECT * FROM sample;
data
----------
this
sqlite> .quit

Now, I like to put text into a $() to variableize it in Powershell. Let's insert a new row and verify:

PS A:\> $("INSERT INTO sample (data) VALUES ('is');") | sqlite3 .\sample.db
PS A:\> sqlite3 .\sample.db
SQLite version 3.24.0 2018-06-04 19:24:41
Enter ".help" for usage hints.
sqlite> .mode column
sqlite> .headers on
sqlite> SELECT * FROM sample;
data
----------
this
is
sqlite> .quit

How about two more?

PS A:\> $("INSERT INTO sample (data) VALUES ('an');") | sqlite3 .\sample.db
PS A:\> $("INSERT INTO sample (data) VALUES ('example');") | sqlite3 .\sample.db

In the above instances we are only performing an INSERT query. If we wanted to do a SELECT query with some style options, we need to create a file (often with a .sql extension):

PS A:\> vi sample.sql
.mode column
.headers on
SELECT * FROM sample;
:wq

Now, our variableized query is in the form of Get-Content:

PS A:\> $(Get-Content .\sample.sql) | sqlite3 .\sample.db
data
----------
this
is
an
example
PS A:\>

If we want our query results as objects, we need to make a few changes. First, we need the mode to be CSV:

PS A:\> vi sample.sql
.mode csv
.headers on
SELECT * FROM sample;
:wq

Next, we need to add another pipe to our string:

PS A:\> $samples = $(Get-Content .\sample.sql) | sqlite3 .\sample.db | ConvertFrom-Csv

This will assign the query's output to an object array named $samples.

PS A:\> $samples

data
----
this
is
an
example

PS A:\> $samples[0]

data
----
this

PS A:\>
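What ConvertFrom-Csv is doing here is purely mechanical: the first row of the CLI's CSV output becomes the property names, every later row becomes one object. The same step in Python's stdlib, for comparison (the CSV text below is what sqlite3 emits in .mode csv with .headers on):

```python
import csv
import io

# What the sqlite3 CLI emits for our query in ".mode csv" with ".headers on"
csv_text = "data\nthis\nis\nan\nexample\n"

# The ConvertFrom-Csv analog: header row becomes the keys of each row-dict
samples = list(csv.DictReader(io.StringIO(csv_text)))
print(samples[0])   # first row, like $samples[0]
```

As in Powershell, indexing gives you one row at a time, and every value is a string until you convert it.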

Let's expand those objects a bit, by adding a new column:

PS A:\> $("ALTER TABLE sample ADD COLUMN data2 TEXT") | sqlite3 .\sample.db
PS A:\> $("UPDATE sample SET data2='database' WHERE data='this'") | sqlite3 .\sample.db
PS A:\> $("UPDATE sample SET data2='not' WHERE data='is'") | sqlite3 .\sample.db
PS A:\> $("UPDATE sample SET data2='excellent' WHERE data='an'") | sqlite3 .\sample.db
PS A:\> $("UPDATE sample SET data2='of a database' WHERE data='example'") | sqlite3 .\sample.db

PS A:\> vi .\sample.sql
.mode csv
.headers on
SELECT * FROM sample;

PS A:\> $samples = $(Get-Content .\sample.sql) | sqlite3 .\sample.db | ConvertFrom-Csv
PS A:\> $samples

data    data2
----    -----
this    database
is      not
an      excellent
example of a database

Now, editing that .sql file might be a bit of a pain, especially if you use an external editor, or are working on a remote machine. Two options are available.

First, escape the newlines with `n notation. Note that you must use double quotes:

PS A:\> $query = ".mode column`n.headers on`nSELECT * FROM sample;"
PS A:\> $query
.mode column
.headers on
SELECT * FROM sample;
PS A:\> $query | sqlite3 .\sample.db
data        data2
----------  ----------
this        database
is          not
an          excellent
example     of a datab

Second, you can use a here string:

PS A:\> $query = @"
>> .mode column
>> .headers on
>> SELECT * FROM sample;
>> "@
PS A:\> $query | sqlite3 .\sample.db
data        data2
----------  ----------
this        database
is          not
an          excellent
example     of a datab

For a slightly more practical example, let's grab all the DLL files in C:\Windows\System32, and create a database:

$testfiles = Get-ChildItem C:\Windows\System32\*.dll
PS A:\> $query = @"
>> CREATE TABLE dllfiles (
>> lastwritetime TEXT,
>> length INT,
>> name TEXT
>> );
>> "@
PS A:\> $query | sqlite3 dllfiles.db
PS A:\> $('.schema') | sqlite3 dllfiles.db
CREATE TABLE dllfiles (
lastwritetime TEXT,
length INT,
name TEXT
);

Note that .schema is a command to see table information in SQLite3.

Now, let's loop through $testfiles, and populate our database. My system shows 3229 such files.

PS A:\> foreach($file in $testfiles){
>> $("INSERT INTO dllfiles (lastwritetime,length,name) VALUES ('$($file.lastwritetime)',$($file.length),'$($file.name)');") | sqlite3 dllfiles.db
>> }

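One caution about the loop above: interpolating $($file.name) straight into the SQL breaks (or worse) the moment a value contains an apostrophe. A database driver with placeholders avoids that; here is the same bulk insert sketched in Python's stdlib, with a couple of made-up rows standing in for the directory listing:

```python
import sqlite3

# Stand-in rows for (lastwritetime, length, name) from the directory listing
files = [("2019-01-22 10:00:00", 34816, "aadauthhelper.dll"),
         ("2019-01-22 10:00:00", 425984, "aadcloudap.dll")]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dllfiles (lastwritetime TEXT, length INT, name TEXT)")

# Placeholders quote each value safely, even names containing apostrophes
con.executemany("INSERT INTO dllfiles VALUES (?,?,?)", files)
count = con.execute("SELECT COUNT(*) FROM dllfiles").fetchone()[0]
```

With the CLI-piping approach this safety net is not available, so it is worth being sure your values cannot contain quotes before interpolating them.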
So, now we can query the first 10 of those files from the database:

PS A:\> $query = @"
>> .mode column
>> .headers on
>> SELECT name,length FROM dllfiles LIMIT 10;
>> "@
PS A:\> $query | sqlite3 dllfiles.db
name               length
-----------------  ----------
aadauthhelper.dll  34816
aadcloudap.dll     425984
aadtb.dll          1122304
AagMmcRes.dll      26112
AboveLockAppHost.  284672
accelerometerdll.  53280
accessibilitycpl.  3825152
accountaccessor.d  322048
AccountsRt.dll     441856
ACCTRES.dll        39936

One use this might serve is to add a column for "new_length" or "delta_length", which could be populated after monthly security updates have been applied. You could then query where length != new_length. You would just change the query above (after adding the column) to:

PS A:\> $testfiles = Get-ChildItem C:\Windows\System32\*.dll
PS A:\> foreach($file in $testfiles){
>> $("UPDATE dllfiles SET new_length=$($file.length) WHERE name='$($file.name)';") | sqlite3 dllfiles.db
>> }