The AWS Marketplace Race Condition Nobody Warns You About

If you’re building a SaaS product on AWS Marketplace, there’s a subtle bug waiting for you in the subscription flow. It won’t show up in testing. It won’t throw an error. Your customer will just land on a broken page, and you’ll spend hours figuring out why.

I’ve shipped 4 SaaS products on AWS Marketplace. This race condition bit me on the first one. Here’s what it is and how to fix it.

How AWS Marketplace Subscription Works

When a customer subscribes to your SaaS product on AWS Marketplace, two things happen:

Flow A: The redirect. The customer clicks “Subscribe” and AWS sends them to your fulfillment URL with a registration token. You call ResolveCustomer to validate it, create a tenant record in your database, and redirect them to your signup page.
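
That ResolveCustomer call is small. Here's a sketch using the AWS SDK for JavaScript v3 (the client, command, and field names are the SDK's own; the region choice and error handling are assumptions):

const { MarketplaceMeteringClient, ResolveCustomerCommand } =
    require('@aws-sdk/client-marketplace-metering');

// The SaaS Marketplace integration APIs are typically called in us-east-1
const metering = new MarketplaceMeteringClient({ region: 'us-east-1' });

async function resolveCustomer(registrationToken) {
    // Exchanges the short-lived token from the redirect for stable IDs
    const { CustomerIdentifier, CustomerAWSAccountId, ProductCode } =
        await metering.send(new ResolveCustomerCommand({
            RegistrationToken: registrationToken,
        }));
    return {
        customerIdentifier: CustomerIdentifier,
        customerAWSAccountId: CustomerAWSAccountId,
        productCode: ProductCode,
    };
}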

Flow B: The SQS notification. AWS also drops a subscribe-success message into your SQS queue. Your backend polls this queue and uses it to update the tenant’s subscription status.

Here’s the problem: these two flows are completely independent. AWS does not guarantee ordering between them.

The Race

The happy path looks like this:

1. Customer clicks Subscribe on AWS Marketplace
2. Customer is redirected to your /register endpoint
3. You call ResolveCustomer, create tenant row (status: subscribed)
4. Customer completes signup
   ... minutes later ...
5. SQS delivers subscribe-success
6. You UPDATE the tenant row -> status stays 'subscribed' (no-op)

Everything works. But here’s what actually happens sometimes:

1. Customer clicks Subscribe on AWS Marketplace
2. SQS delivers subscribe-success            <-- this arrives FIRST
3. You try to UPDATE the tenant row
4. ... but the row doesn't exist yet
5. UPDATE affects 0 rows. No error. Silent failure.
6. SQS message is deleted from the queue.    <-- it's gone now
   ... seconds later ...
7. Customer is redirected to your /register endpoint
8. You call ResolveCustomer, create tenant row
9. But you missed the subscribe-success event
10. What status do you set?

The SQS event arrived before your customer did. Your UPDATE hit nothing. The message was deleted from the queue. And now you have a customer with no subscription status, or worse, a customer stuck on a “subscription pending” screen forever.

This isn’t a theoretical edge case. It happens in production. The time between the customer clicking Subscribe and actually landing on your registration page can vary wildly – they might have a slow connection, they might get distracted, or your redirect might take a few seconds while SQS delivers in milliseconds.

The Wrong Fix

The obvious fix is: “Just don’t delete the SQS message if the tenant doesn’t exist yet. Let it retry.”

This is fragile. You’re now relying on SQS redelivery timing. If the customer takes 5 minutes to complete the redirect, you’re burning SQS visibility timeouts and retries. If they never complete registration, you have a poison message bouncing forever. And you’ve coupled your SQS processing to the state of a completely separate HTTP flow.

The Fix: Event Sourcing Lite

The solution is to decouple the two concerns:

  1. Always persist the SQS event, regardless of whether the tenant exists.
  2. Reconcile at registration time by reading the event history.

Here’s how it works in practice.

Step 1: Always save the event

When an SQS message arrives, write it to a subscription_events table first, unconditionally. Then attempt to update the tenant:

async saveSubscriptionEvent(message) {
    const { action, customerIdentifier, productCode } = message;

    // Always write to the audit log first -- it depends on no other table
    db.subscriptionEvents.add(action, customerIdentifier, productCode, message);

    // Attempt to update the tenant (may not exist yet)
    if (action === 'subscribe-success') {
        const result = db.customers.updateSubscriptionStatus(
            customerIdentifier, 'subscribed'
        );
        if (result.changes === 0) {
            // Tenant hasn't registered yet. That's fine.
            // The event is safely persisted in subscription_events.
            logger.warn(
                `Customer ${customerIdentifier} not found in tenants table. ` +
                `Status will be reconciled at registration time.`
            );
        }
    }

    // Delete from SQS -- safe because the event is persisted locally
    await this.deleteMessage(message);
}

The key insight: the subscription_events table is your durable log. It doesn’t depend on any other table existing. The SQS message can be safely deleted because the information has been transferred to your database.
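
For completeness, here is a sketch of the long-polling loop that could drive saveSubscriptionEvent, using @aws-sdk/client-sqs (the queue URL, the SNS envelope unwrapping, and the key normalization are assumptions about your setup):

const { SQSClient, ReceiveMessageCommand } = require('@aws-sdk/client-sqs');

const sqs = new SQSClient({ region: 'us-east-1' });
const QUEUE_URL = process.env.MARKETPLACE_QUEUE_URL;  // your subscription queue

async function pollSubscriptionQueue(handler) {
    while (true) {
        const { Messages = [] } = await sqs.send(new ReceiveMessageCommand({
            QueueUrl: QUEUE_URL,
            MaxNumberOfMessages: 10,
            WaitTimeSeconds: 20,  // long polling
        }));
        for (const msg of Messages) {
            // Marketplace notifications arrive via SNS, so the SQS body is
            // an SNS envelope; the event itself is in the .Message field.
            // The raw event uses hyphenated keys (customer-identifier),
            // normalized here to the camelCase names the handler expects.
            const event = JSON.parse(JSON.parse(msg.Body).Message);
            await handler.saveSubscriptionEvent({
                action: event.action,
                customerIdentifier: event['customer-identifier'],
                productCode: event['product-code'],
                receiptHandle: msg.ReceiptHandle,  // needed for deleteMessage
            });
        }
    }
}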

Step 2: Reconcile at registration

When the customer finally hits /register, check the event history before creating the tenant:

// POST /register
app.post('/register', async (req, res) => {
    const { customerIdentifier, customerAWSAccountId } =
        await resolveCustomer(req.body.token);

    const existingTenant = db.customers.getByAwsAcctId(customerAWSAccountId);
    if (existingTenant) {
        // Returning customer -- redirect to login
        return res.redirect('/login');
    }

    // New customer -- check if SQS events arrived before they did
    const latestEvent = db.subscriptionEvents.getLatestByCustomer(
        customerIdentifier
    );
    const subscriptionStatus = latestEvent?.action === 'unsubscribe-success'
        ? 'unsubscribed'
        : 'subscribed';

    db.customers.add(
        customerAWSAccountId,
        customerIdentifier,
        offerType,          // <-- from your own listing/plan config (not shown)
        subscriptionStatus  // <-- reconciled from event history
    );

    res.redirect('/signup');
});

The query is simple:

SELECT action, customer_identifier, created_at
FROM subscription_events
WHERE customer_identifier = ?
ORDER BY created_at DESC
LIMIT 1

If a subscribe-success event exists, the tenant is created as subscribed. If somehow an unsubscribe-success is the latest event, the tenant is created as unsubscribed. If no events exist yet (normal flow where the customer arrived before SQS), the default is subscribed – which is correct because ResolveCustomer itself validates that the subscription is active.

Why This Works

The subscription_events table acts as a write-ahead log. It decouples event persistence from tenant existence. No matter what order things happen:

Normal order (customer registers first):

/register creates tenant as 'subscribed' (default)
SQS arrives later, UPDATEs tenant -> no-op, already correct

Race condition (SQS arrives first):

SQS handler writes to subscription_events, UPDATE hits 0 rows -> that's fine
/register reads subscription_events, finds subscribe-success
Creates tenant as 'subscribed' -> correct

Edge case (unsubscribe before register):

SQS delivers unsubscribe-success, persisted to subscription_events
Customer visits /register
Latest event is unsubscribe-success -> tenant created as 'unsubscribed'
Access correctly denied

Every path converges to the correct state.

The Schema

You need one extra table:

CREATE TABLE subscription_events (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    action TEXT NOT NULL,
    customer_identifier TEXT NOT NULL,
    product_code TEXT NOT NULL,
    offer_identifier TEXT,
    raw_payload TEXT,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_subscription_events_customer
ON subscription_events(customer_identifier, created_at DESC);

The descending index on created_at makes the “get latest event” query fast. The raw_payload column stores the full SQS message body – useful for debugging and audit.
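
If you're wondering how the db helpers used above map onto this schema: the result.changes check in the SQS handler suggests better-sqlite3, so here is a minimal sketch under that assumption:

const Database = require('better-sqlite3');
const db = new Database('app.db');

const subscriptionEvents = {
    add(action, customerIdentifier, productCode, message) {
        db.prepare(`
            INSERT INTO subscription_events
                (action, customer_identifier, product_code, raw_payload)
            VALUES (?, ?, ?, ?)
        `).run(action, customerIdentifier, productCode, JSON.stringify(message));
    },
    getLatestByCustomer(customerIdentifier) {
        // Served by idx_subscription_events_customer
        return db.prepare(`
            SELECT action, customer_identifier, created_at
            FROM subscription_events
            WHERE customer_identifier = ?
            ORDER BY created_at DESC
            LIMIT 1
        `).get(customerIdentifier);  // undefined if no events yet
    },
};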

Bonus: You Get an Audit Trail for Free

This pattern gives you a complete history of every subscription lifecycle event. When a customer opens a support ticket saying “I subscribed but can’t access the product,” you can query:

SELECT action, created_at
FROM subscription_events
WHERE customer_identifier = 'cust-abc-123'
ORDER BY created_at DESC;
subscribe-success     2024-03-15 14:23:01
unsubscribe-pending   2024-03-15 14:22:58
subscribe-success     2024-01-10 09:15:33

You’ll know exactly what happened and when, without digging through CloudWatch logs.

Takeaway

The general pattern here is older than AWS: persist events before acting on them, and reconcile state from the event log. It’s event sourcing applied to a very specific problem, and it’s the simplest version of it – just one table, one query at registration time, and zero retry logic.

If you’re building an AWS Marketplace SaaS integration, save yourself the debugging session. Add the subscription_events table from day one.


I’ve packaged the production code behind this (and all the other AWS Marketplace plumbing – ResolveCustomer, auth, entitlements, metering) into a self-hosted Node.js gateway kit. If you’re listing a SaaS product on AWS Marketplace and don’t want to rebuild this from scratch, check it out here.


AWS Marketplace Jumpstart Kit

Are you building or listing a SaaS product on AWS Marketplace?

A pattern I’ve seen repeatedly: you end up rebuilding the same plumbing every time — customer onboarding, authentication, entitlement/subscription gating, and metering.

So I’m packaging my production code into a Node.js AWS Marketplace Authentication Gateway + Metering Kit (2-in-1). This is the same code I’ve used in production to ship 4 AWS Marketplace SaaS products.

What it is

  • Self-hosted Node.js AWS Marketplace Authentication Gateway + Metering Kit (PAYG + Contract)

Who it’s for

  • Teams building SaaS listings on AWS Marketplace who don’t want to rebuild ResolveCustomer/fulfillment, entitlement checks, subscription state, and metering semantics. This is one thing you don’t want to get wrong.

What it does

  • ResolveCustomer + fulfillment onboarding
  • Org admin panel (add/remove users)
  • Gateway routing (authenticate incoming requests and forward to your upstream)
  • Entitlement + PAYG subscription gating
  • Metering endpoint (aggregation/dedupe/hourly semantics; monthly credits)

What it does NOT do

  • SSO/OIDC (optional add-on)
  • Stripe billing (AWS Marketplace-only)

How it’s delivered

  • Private repo access + source included
  • Runs in your VPC; no required third-party SaaS

Pricing

  • $999 includes 12 months of updates
  • White-glove installation and support available for extra

Get Started


Monitoring AWS Costs

To view your cost breakdown, go to Billing and Cost Management -> Cost Explorer and, under Group By, select Usage Type.

Selecting Usage Type provides more granular detail, e.g., it shows exactly what within EC2 - Other is taking up costs. Most of the time these are EBS volumes. https://repost.aws/knowledge-center/ebs-charge-stopped-instance
Amazon EBS snapshots are billed at a lower rate than active EBS volumes. You can minimize your Amazon EBS charges but still retain the information that’s stored in Amazon EBS for later use. To do this, create a snapshot of the volume as a backup, and then delete the active volume. Later, when you need the information from the snapshot, use the snapshot to replace the EBS volume for use with your infrastructure.
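
If you'd rather script this than click through the console, here is a sketch using the AWS SDK for JavaScript v3 (the volume ID and region are placeholders; the volume must be detached before deletion):

const { EC2Client, CreateSnapshotCommand, DeleteVolumeCommand,
        waitUntilSnapshotCompleted } = require('@aws-sdk/client-ec2');

const ec2 = new EC2Client({ region: 'us-west-2' });  // placeholder region

async function archiveAndDeleteVolume(volumeId) {
    // 1. Snapshot the volume -- snapshots bill at a lower rate than volumes
    const { SnapshotId } = await ec2.send(new CreateSnapshotCommand({
        VolumeId: volumeId,
        Description: `Archive of ${volumeId} before deletion`,
    }));

    // 2. Wait for the snapshot to finish before touching the volume
    await waitUntilSnapshotCompleted(
        { client: ec2, maxWaitTime: 3600 },
        { SnapshotIds: [SnapshotId] }
    );

    // 3. Delete the volume (it must be detached, i.e., 'available')
    await ec2.send(new DeleteVolumeCommand({ VolumeId: volumeId }));
    return SnapshotId;  // restore later with CreateVolume and this snapshot ID
}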


Stay Off the Grid: Routing Internal Traffic via Route 53 Private Hosted Zones

In modern cloud architecture, “upstream” services constantly need to talk to “downstream” APIs. For our AI Interviewer application, we recently faced a challenge: how to securely and efficiently call the metering endpoint on our authentication gateway that handles billing.

While the solution might seem straightforward, the path to getting it right involved avoiding some common networking pitfalls.


The Dilemma: Public Latency vs. Private Complexity

When connecting two services within AWS, you generally have two “obvious” but flawed choices:

Option 1: The Public Route

You call the endpoint using its public URL (e.g., https://meter.example.com/bill).

  • The Problem: Traffic leaves the AWS backbone and traverses the public internet unnecessarily. Furthermore, if a developer accidentally uses http instead of https, sensitive API keys could be leaked over the wire.

Option 2: The Direct IP Route

You whitelist the upstream Security Group and call the instance directly via its private IP (e.g., http://x.y.z.w:PORT/bill).

  • The Problem: This is brittle. It requires both services to be in the same VPC, and it forces the application to bind to 0.0.0.0 rather than 127.0.0.1. This weakens our security posture by bypassing NGINX, which usually acts as our protective gatekeeper.

The Elegant Middle Ground: Private Hosted Zones

We wanted the best of both worlds: the clean, domain-based approach of Option 1, but the security and speed of Option 2. The solution is an AWS Route 53 Private Hosted Zone (PHZ).

A PHZ acts as an internal DNS server that only exists within your specified VPC. When your application looks up meter.example.com, Route 53 returns a private IP instead of a public one.


Step-by-Step Implementation

1. Create the Private Hosted Zone

First, we tell Route 53 to manage the domain internally for our specific VPC.

aws route53 create-hosted-zone \
    --name meter.example.com \
    --vpc VPCRegion=us-west-2,VPCId=vpc-xxx \
    --caller-reference $(date +%s) \
    --hosted-zone-config Comment="Internal API routing",PrivateZone=true

2. Map the Domain to a Private IP

Next, we create an “A Record” that points our domain name to the internal private IP of our gateway server.

aws route53 change-resource-record-sets \
    --hosted-zone-id /hostedzone/Z01165483IVUF0VMOL2KT \
    --change-batch '{
        "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "meter.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "x.y.z.w"}]
            }
        }]
    }'


The Result: Split-Horizon DNS in Action

Now, when our upstream AI Interviewer performs a DNS lookup, it no longer sees the public internet. It stays entirely within the AWS fabric.

Verification via nslookup:

$ nslookup meter.example.com

Non-authoritative answer:
Name:    meter.example.com
Address: x.y.z.w  <-- Our Internal IP!

Why This Matters

  • Lower Latency: Traffic never leaves the AWS network.
  • Security by Default: Even if a request accidentally goes out over plain HTTP, the data stays on a private network.
  • Maintainability: We can keep our NGINX configuration exactly as it is, maintaining a consistent security posture across public and private requests.

By leveraging Private Hosted Zones, we’ve built a system that is as easy to use as a public API but as secure as a private LAN.



Comparison of different ways to connect to EC2 instance

I am confused by the many access methods provided by AWS – native SSH, EC2 Instance Connect, Session Manager, and now the Serial Console – that's 4. Why doesn't AWS provide 1 recommended, best method to connect to EC2 instances?

Yeah, AWS really did a “collect them all” with EC2 access methods 😅
The good news: they each exist for a different primary use case. You don’t need to use all four.

Let me map them out in plain English and then I’ll give you a recommendation for one main way to use going forward.


The 4 ways in one sentence each

  1. Native SSH
    • Old-school: ssh ubuntu@ip.
    • Uses your key pair, Security Groups, public IP / bastion.
    • Simple, familiar, but tied to network exposure and key management.
  2. EC2 Instance Connect (browser / CLI)
    • Still SSH under the hood, but AWS temporarily injects an SSH key for you.
    • Good for ad-hoc admin, less key-juggling, still needs port 22 open.
  3. Session Manager (SSM)
    • No SSH, no open ports, no public IP needed.
    • Uses SSM agent + IAM. You connect via AWS console / CLI.
    • Great for locked-down prod instances and auditing.
  4. EC2 Serial Console
    • Like plugging a keyboard+monitor physically into the box.
    • Works even when SSH/network/fw is broken.
    • Pure break-glass / recovery tool, not for daily use.

Why AWS doesn’t say “just use this one”

Because different customers have very different priorities:

  • Old existing systems → expect classic SSH.
  • Enterprises / regulated environments → want no inbound ports, full audit → prefer Session Manager.
  • People who occasionally need OS-level recovery → need Serial Console.
  • People who like SSH but hate key distribution → Instance Connect is a compromise.

AWS can’t kill SSH (too widely used), but also can’t tell banks “yeah just keep 22 open to the internet”. So you end up with a menu instead of a single “best” choice.


How you can think about it (practical mental model)

For a startup / solo dev like you, a sane mental model is:

1. Pick ONE as your “everyday access”

If you’re comfortable with AWS-y stuff and IAM:

Use Session Manager as your primary.

  • No open SSH port
  • Works even if public IP changes
  • Auditable and more secure by default

If you just want something familiar and fast:

Use native SSH as your primary, but:

  • Lock Security Group to your IP (not 0.0.0.0/0 if possible)
  • Use good key hygiene
  • Maybe also enable SSM for backup

2. Keep ONE “break glass” method

  • That should be EC2 Serial Console (plus SSM if available).
  • You only touch it when:
    • sshd is broken,
    • firewall rules are wrong,
    • you’ve locked yourself out.

3. Treat EC2 Instance Connect as optional sugar

  • It’s just a nicer way to do SSH without juggling keys.
  • You don’t need it if you’re happy with SSH or SSM.

Concrete recommendation tailored to you

Given how deep you already are with AWS and how this SSH lockout just bit you:

If I were in your shoes, I’d do this going forward:

  1. Enable SSM + Session Manager on all EC2s
    • Attach an IAM role with AmazonSSMManagedInstanceCore.
    • Make sure SSM agent is installed (it is by default on most Amazon Linux / Ubuntu AMIs).
    • Use Session Manager as your “I just need a shell to debug/patch something” tool.
  2. Keep SSH, but treat it as a convenience
    • Keep port 22 open only from your current IP (or use a small bastion).
    • Use it for git push/port forwarding/etc. when it’s working.
    • But know that when SSH breaks, you have SSM + Serial Console.
  3. Enable EC2 Serial Console as the last line of defense
    • Give access only to your own admin role (or “break-glass” role).
    • Use it only when:
      • SSM isn’t set up / not working, and
      • SSH is dead, and you don’t want to detach volumes.

That way you’re not “choosing between four things” every time. It becomes:

  • Daily shell: Session Manager (or SSH if you really prefer).
  • Emergency: Serial Console.
  • Instance Connect: nice-to-have, not required.

One gotcha with EC2 Instance Connect is the default session timeout. EC2 Instance Connect sessions typically last for one hour, after which you're automatically disconnected even if you're actively using the session.

Unfortunately, this one-hour session limit for EC2 Instance Connect is a fixed value and cannot be extended. This is different from the idle timeout that can be configured in some other AWS services.

Controlling the SSM session duration

There is no dedicated CLI subcommand for this; Session Manager stores its preferences in the SSM-SessionManagerRunShell document, so you update that document (timeout values are in minutes):

aws ssm update-document \
    --name "SSM-SessionManagerRunShell" \
    --content '{"schemaVersion": "1.0", "description": "Session Manager preferences", "sessionType": "Standard_Stream", "inputs": {"idleSessionTimeout": "60", "maxSessionDuration": "120"}}' \
    --document-version '$LATEST'

You can also configure these settings in the Session Manager preferences within the AWS Systems Manager console.

Using the AWS Console

  1. Sign in to the AWS Systems Manager console.
  2. In the left navigation pane, choose Session Manager.
  3. Choose the Preferences tab.
  4. Choose Edit and modify the values for Idle session timeout and/or Maximum session duration.
  5. Choose Save changes to apply the new settings. 

Using SSH+SSM to connect to VM

add this to ~/.ssh/config:

Host king-cobra
      HostName i-xxx
      User ubuntu
      IdentityFile ~/.ssh/id_ed25519
      ProxyCommand aws ssm start-session --target %h --region us-west-2 --document-name AWS-StartSSHSession --parameters portNumber=%p
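
With that stanza in place, a plain ssh king-cobra (and scp, port forwarding, etc.) is tunneled through SSM: no inbound port 22 and no public IP required. The instance only needs the SSM agent running and the AmazonSSMManagedInstanceCore role described above.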

Some more tips on SSM

Run these commands on your local computer (not the EC2 instance) if you see permission-denied errors. They create the logs directory where the Session Manager plugin stores its logs and give your user ownership of the directory. You also need to create a /usr/local/sessionmanagerplugin/seelog.xml file.

sudo mkdir -p /usr/local/sessionmanagerplugin/logs
sudo chown $USER:$USER /usr/local/sessionmanagerplugin/logs

Command to connect to EC2 using SSM:

aws ssm start-session --target i-xxx --region us-west-2 --document-name AWS-StartInteractiveCommand --parameters command="sudo su - ubuntu"

The command="sudo su - ubuntu" parameter logs you in as ubuntu. By default, SSM logs you in as ssm-user, which may not be very helpful.


Convert Word to Markdown

pandoc input.docx -o output.md --extract-media=./images

Steps to test AWS MP Integration

  • Create a separate buyer/test AWS account in your AWS Organization
  • Grant yourself access to it (IAM Identity Center: create/admin-assign a permission set like AdministratorAccess to your user/group for that account).
  • In AWS Marketplace Management Portal (seller account), add the test account ID to the product’s Limited visibility allowlist. This is the magic step. It will allow you to subscribe to the product in a test environment before its visibility is updated to public and thus simulate the flow when a real customer subscribes to your product.
  • From the test account, open the listing (direct URL if needed) → Subscribe → complete the redirect to your /register endpoint → verify ResolveCustomer + entitlements flow.
  • Before/after going Public:
    • Either cancel the test subscription, or
    • In your app, maintain a do-not-meter customer/account list (skip MeterUsage / metering events for those customers; sketched below), or
    • Create a $0 private offer for the test account (best for ongoing testing on a Public listing).

https://docs.aws.amazon.com/marketplace/latest/userguide/metering-for-usage.html

  • Even if there is no usage to report, you can keep sending metering records every hour with a quantity of 0.
  • During publishing, the AWS Marketplace Operations team will test that the SaaS application sends the metering record successfully before allowing the product to be published. Typically, the team will perform a mock sign up of the SaaS and confirm that a metering record is received.
  • If this is a SaaS product with the “Subscription” pricing model (not “Contract” or “Contract with Consumption”), then the buyer can unsubscribe at any time. The other two pricing models have a set duration based on the time of subscription, and the buyer cannot unsubscribe during it; they can only turn off auto-renewal.
  • Please note that pricing model change is not supported for SaaS products. [1]
  • Metering requests are deduplicated on the hour, per product/customer/hour/dimension: if all four match an earlier record, the new quantity is dropped rather than aggregated.
  • You can always retry any request, but if you retry with a different quantity, the original quantity is what gets billed.
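
The metering notes above translate into a small amount of code. Here is a sketch of an hourly metering job using BatchMeterUsage from the AWS SDK for JavaScript v3 (the do-not-meter set, dimension name, and region are placeholders):

const { MarketplaceMeteringClient, BatchMeterUsageCommand } =
    require('@aws-sdk/client-marketplace-metering');

const metering = new MarketplaceMeteringClient({ region: 'us-east-1' });

// Test accounts from the allowlist step above -- never meter these
const DO_NOT_METER = new Set(['cust-test-identifier']);  // placeholder

async function meterHourlyUsage(productCode, usageByCustomer) {
    // usageByCustomer: Map of customerIdentifier -> quantity for this hour.
    // Quantity 0 is fine for idle hours (see the first bullet above).
    const now = new Date();
    const UsageRecords = [...usageByCustomer.entries()]
        .filter(([customerId]) => !DO_NOT_METER.has(customerId))
        .map(([customerId, quantity]) => ({
            Timestamp: now,
            CustomerIdentifier: customerId,
            Dimension: 'requests',  // placeholder dimension name
            Quantity: quantity,
        }));
    if (UsageRecords.length === 0) return;

    // Duplicates (same product/customer/hour/dimension) are dropped, not
    // aggregated, so a retry with the same quantity is safe.
    // Note: BatchMeterUsage accepts at most 25 records per call; chunk if needed.
    const { UnprocessedRecords = [] } = await metering.send(
        new BatchMeterUsageCommand({ ProductCode: productCode, UsageRecords })
    );
    if (UnprocessedRecords.length > 0) {
        console.warn('Unprocessed metering records, retry later:', UnprocessedRecords);
    }
}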

What really distinguishes the 3 types of SaaS Products – subscription, contract, contract + consumption?

A SaaS product on AWS MP has one or more pricing dimensions associated with it. A pricing dimension can be of two types: Entitled or ExternallyMetered. ExternallyMetered dimensions have to be manually billed by the seller using the BatchMeterUsage endpoint; think of an electricity meter. Entitled dimensions are billed via contracts, monthly or yearly; AWS automatically takes care of ongoing billing and provides an endpoint you can call to check whether a customer has an entitlement. Think of a Netflix subscription. You can offer multiple entitlements; they can be mutually exclusive but don't have to be. What an entitlement buys a customer is completely up to you; it's an internal detail. I think of entitlements as tiers, e.g., basic, pro, and enterprise versions of the product.

Definitions:

  • A pure subscription or pay-as-you-go (PAYG) product only contains ExternallyMetered pricing dimensions. Customer gets variable bill per month (just like your electricity bill).
  • A pure contract product only contains Entitled pricing dimensions. Customer gets a flat bill per month.
  • A contract + consumption product contains at least one Entitled and at least one ExternallyMetered dimension

Even if you are developing a pure PAYG product (again, AWS calls this a SaaS subscription), you might want to list it as Contract + Consumption when creating the listing in the AWS MP Seller Console. Why? First, it future-proofs the listing if you decide to change pricing later: you keep the contract + consumption billing model and simply update the price of the entitlement from zero to non-zero. Second, for the contract and contract + consumption models you can call the Entitlement service to check whether the customer has an active contract. No such service exists for PAYG (SaaS subscription); you must maintain the customer's subscription status in your own database and remember to update it when the customer unsubscribes. You can create an entitlement at $0/month (or $1/month) and treat it as providing access to your platform. In short, a contract + consumption listing can be configured to mimic a pure PAYG or pure contract product, but the reverse is not possible.
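
For contract and contract + consumption listings, that entitlement check looks roughly like this (AWS SDK for JavaScript v3; the product code and region are placeholders):

const { MarketplaceEntitlementServiceClient, GetEntitlementsCommand } =
    require('@aws-sdk/client-marketplace-entitlement-service');

const entitlements = new MarketplaceEntitlementServiceClient({ region: 'us-east-1' });

async function hasActiveContract(customerIdentifier) {
    const { Entitlements = [] } = await entitlements.send(
        new GetEntitlementsCommand({
            ProductCode: 'your-product-code',  // placeholder
            Filter: { CUSTOMER_IDENTIFIER: [customerIdentifier] },
        })
    );
    // An entitlement with a future (or absent) expiration date means the
    // contract is still active.
    return Entitlements.some(e =>
        !e.ExpirationDate || new Date(e.ExpirationDate) > new Date()
    );
}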



Useful Windows Commands

Get all the system info

systeminfo

Get RAM details

 Get-CimInstance Win32_PhysicalMemory | Select-Object DeviceLocator, Manufacturer, @{Name="Capacity(GB)"; Expression={$_.Capacity / 1GB}}, ConfiguredClockSpeed

DeviceLocator  Manufacturer Capacity(GB) ConfiguredClockSpeed
-------------  ------------ ------------ --------------------
ChannelB-DIMM0 859B                   16                 2400

Get total expandable RAM

Get-CimInstance Win32_PhysicalMemoryArray | Select-Object MaxCapacity, MemoryDevices

MaxCapacity MemoryDevices
----------- -------------
   33554432             2

Get SSD info

Get-PhysicalDisk | Select-Object FriendlyName, MediaType, HealthStatus, Size

FriendlyName      MediaType HealthStatus         Size
------------      --------- ------------         ----
KINGSTON SNVS500G SSD       Healthy      500107862016

Get OS info

 Get-ComputerInfo | Select-Object OSName, OSVersion, OSDisplayVersion, OSBuildNumber

OsName                   OsVersion  OSDisplayVersion OsBuildNumber
------                   ---------  ---------------- -------------
Microsoft Windows 11 Pro 10.0.26100 24H2             26100

Get motherboard info

Get-CimInstance -ClassName Win32_BaseBoard | Select-Object Manufacturer, Product, SerialNumber, Version

Manufacturer Product SerialNumber      Version
------------ ------- ------------      -------
AZW          SEi     CB1D27211C14S0696 Type2 - Board Version

Get CPU info

 Get-CimInstance Win32_Processor | Select-Object Name, NumberOfCores, NumberOfLogicalProcessors, MaxClockSpeed

Name                                     NumberOfCores NumberOfLogicalProcessors MaxClockSpeed
----                                     ------------- ------------------------- -------------
Intel(R) Core(TM) i5-8279U CPU @ 2.40GHz             4                         8          2400

Get BIOS

Get-CimInstance Win32_BIOS | Select-Object Manufacturer, SMBIOSBIOSVersion, ReleaseDate

Manufacturer SMBIOSBIOSVersion ReleaseDate
------------ ----------------- -----------
INSYDE Corp. CB1D_FV106        8/24/2021 5:00:00 PM

Get networking info

 Get-NetIPAddress -AddressFamily IPv4 | Select-Object InterfaceAlias, IPAddress

List installed software

Get-ItemProperty HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* | Select-Object DisplayName, DisplayVersion, Publisher | Sort-Object DisplayName

Tip: Do not install wmic; it is deprecated. Use PowerShell instead.


10 Steps to secure your home network

I compiled most of this checklist thanks to ChatGPT. Log in to the router dashboard (typically 192.168.0.1) and from there verify:

1. No Virtual Servers

Forwarding -> Virtual Servers

2. DMZ is disabled

Forwarding -> DMZ

3. No Port Triggering

Forwarding -> Port Triggering

4. SPI Firewall is Enabled

Security -> Basic Security

5. UPnP is disabled

Forwarding -> UPnP

6. Remote Management is off

Security -> Remote Management

7. Disable WPS

WPS

8. Use WPA2-AES or WPA3, strong Wi-Fi password

Wireless -> Wireless Security

9. Set Network Profile in Windows to Public

Under Network and Internet

10. Get your router’s public IP address and do a port scan from a VM outside your network

You can get your router’s public IP address from the router admin dashboard or from Powershell:

 (Invoke-WebRequest -UseBasicParsing "https://api.ipify.org").Content

Now do a port scan from a computer outside your network to see if there are any open (exposed) ports:

$ sudo nmap -Pn -sS -T3 --top-ports 1000 --reason $HOME_PUBLIC_IP

You want to see output like:

All 1000 scanned ports on c-xxx.hsd1.wa.comcast.net (xxx) are in ignored states.
Not shown: 1000 filtered tcp ports (no-response)

Bonus: scan UDP ports:

$ sudo nmap -Pn -sU -T3 --reason -p 53,67,68,69,123,161,500,1900,5353,11211 $HOME_PUBLIC_IP

You want to see:

PORT      STATE         SERVICE  REASON
53/udp    open|filtered domain   no-response
67/udp    open|filtered dhcps    no-response
68/udp    open|filtered dhcpc    no-response
69/udp    open|filtered tftp     no-response
123/udp   open|filtered ntp      no-response
161/udp   open|filtered snmp     no-response
500/udp   open|filtered isakmp   no-response
1900/udp  open|filtered upnp     no-response
5353/udp  open|filtered zeroconf no-response
11211/udp open|filtered memcache no-response

Bonus Commands

Get your IPv6 address:

 ipconfig | findstr /i "IPv6"

If this only displays a Link-local IPv6 Address starting with fe80, you don't have a public IPv6 address.

List your network interfaces and whether IPv6 is enabled on each:

Get-NetAdapterBinding -ComponentID ms_tcpip6 | Format-Table Name,Enabled -AutoSize

Name                               Enabled
----                               -------
Wi-Fi                                 True
Bluetooth Network Connection          True
Ethernet                              True
Ethernet 2                            True
vEthernet (WSL (Hyper-V firewall))    True

Block Malware and Adult Content

Under DHCP settings (and WAN), change the primary and secondary DNS to 1.1.1.3 and 1.0.0.3 (Cloudflare's malware- and adult-content-blocking resolvers). Run ipconfig /all (Windows) and verify the new DNS servers have taken effect.

Use this with caution, as it can block legit websites:

>nslookup sidstick.com
Server:  family.cloudflare-dns.com
Address:  1.1.1.3

Non-authoritative answer:
Name:    sidstick.com
Addresses:  ::
          0.0.0.0

If I change to Google nameservers

nslookup sidstick.com 8.8.8.8
Server:  dns.google
Address:  8.8.8.8

Non-authoritative answer:
Name:    sidstick.com
Address:  35.215.78.203

Rebooting the device

Click the Reboot button under System Tools to reboot this device.

Some settings of this device will take effect only after rebooting, which include:

  • Change the LAN IP Address (system will reboot automatically).
  • Change the DHCP Settings.
  • Change the Web Management Port.
  • Upgrade the firmware of this device (system will reboot automatically).
  • Restore this device’s settings to the factory defaults (system will reboot automatically).
  • Update the configuration with the file (system will reboot automatically).

What AI tool was used to create each of the websites below

and which one is your favorite?
