Sunday, December 29, 2024

what are the most critical 18 CIS security controls ?

The 18 CIS (Center for Internet Security) Critical Security Controls are a prioritized set of best practices designed to strengthen an organization's cybersecurity posture. These controls are divided into three categories: Basic, Foundational, and Organizational. Below is an overview of the controls:

Basic Controls (1–6):

These are foundational and should be implemented first to establish a strong security baseline:

  1. Inventory and Control of Enterprise Assets: Maintain an accurate inventory of all enterprise hardware assets to identify unauthorized devices.
  2. Inventory and Control of Software Assets: Track all software to prevent unauthorized or vulnerable applications.
  3. Data Protection: Protect sensitive data through encryption, access control, and monitoring.
  4. Secure Configuration of Enterprise Assets and Software: Ensure systems and software are securely configured to minimize vulnerabilities.
  5. Account Management: Manage user accounts, including creating, monitoring, and revoking access as needed.
  6. Access Control Management: Limit access rights based on job roles, enforce multi-factor authentication, and monitor access.

Foundational Controls (7–16):

These build upon the Basic controls to provide more detailed security measures:

  7. Continuous Vulnerability Management: Regularly identify and address vulnerabilities in systems and software.
  8. Audit Log Management: Collect, review, and retain logs to detect and analyze security incidents.
  9. Email and Web Browser Protections: Implement safeguards against phishing, malicious links, and other email/browser-based threats.
  10. Malware Defenses: Deploy tools to detect, prevent, and respond to malware infections.
  11. Data Recovery: Develop robust backup and recovery processes to ensure data restoration after incidents.
  12. Network Infrastructure Management: Securely manage network devices and configurations to reduce risks.
  13. Network Monitoring and Defense: Use tools to monitor network traffic for suspicious activity and defend against threats.
  14. Security Awareness and Skills Training: Educate employees on cybersecurity risks and best practices through regular training.
  15. Service Provider Management: Evaluate third-party providers for secure handling of sensitive data.
  16. Application Software Security: Assess and mitigate vulnerabilities in software developed or used by the organization.

Organizational Controls (17–18):

These focus on broader organizational practices for maintaining security:

  17. Incident Response Management: Create a plan to detect, respond to, and recover from security incidents effectively.
  18. Penetration Testing: Conduct regular penetration tests to identify weaknesses in systems, processes, or personnel.

These controls are designed to be flexible and applicable across various industries, helping organizations prioritize cybersecurity efforts based on their risk profiles[1][2][3].

Citations:
[1] https://dev.to/awais_684/implement-cis-top-18-controls-in-your-organization-1j69
[2] https://hyperproof.io/cis-security-controls/
[3] https://www.kiteworks.com/risk-compliance-glossary/cis-controls-v8/
[4] https://www.impactmybiz.com/blog/cisv8-critical-security-controls/
[5] https://blog.netwrix.com/2022/09/16/top-cis-critical-security-controls-for-cyber-defense/
[6] https://www.securitymetrics.com/blog/whats-changed-cis-controls-v8
[8] https://www.cisecurity.org/controls

Friday, December 13, 2024

debugging web application firewall errors

The goal is to understand what is causing a request to fail on the Azure WAF. Example log query:


AzureDiagnostics
| where ResourceProvider == "MICROSOFT.NETWORK"
| where Category == "ApplicationGatewayFirewallLog"
| where action_s == "Matched"
| project details_message_s, details_data_s


This will return a details_message_s such as:

details_message_s: Pattern match (?:\$(?:\((?:\(.*\)|.*)\)|\{.*\})|[<>]\(.*\)) at ARGS.

along with the corresponding details_data_s values.


Copy those values to https://regex101.com/ to find where it fails.
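As a scripted alternative to pasting into regex101, the same pattern can be checked locally with grep -P. The payloads below are made-up examples (command substitution, variable expansion, process substitution), not values taken from the logs:

```shell
# The CRS pattern from details_message_s; single quotes keep the shell
# from expanding anything inside it.
pattern='(?:\$(?:\((?:\(.*\)|.*)\)|\{.*\})|[<>]\(.*\))'

# Made-up sample payloads: the first three should match, the last should not.
for payload in 'name=$(whoami)' 'path=${HOME}/x' 'diff <(ls)' 'plain text'; do
    if printf '%s' "$payload" | grep -qP "$pattern"; then
        echo "MATCH:    $payload"
    else
        echo "NO MATCH: $payload"
    fi
done
```

Note that grep -P needs GNU grep with PCRE support; on other platforms perl -ne can stand in.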



https://techcommunity.microsoft.com/blog/azurenetworksecurityblog/azure-waf-tuning-for-web-applications/3776133

Friday, December 6, 2024

creating images in ACR

$CONTAINER_IMAGE_NAME = "your_image_name:0.1"

$CONTAINER_REGISTRY_NAME = "your_registry_name"

 

az login --tenant "XXXXXXXXXXXXXXXXXX"

az account set --name azurecloud --subscription "XXXXXXXXXXXXXXXXXXXXXXXX"

 

# This builds the image from local files

az acr build --registry "$CONTAINER_REGISTRY_NAME" --image "$CONTAINER_IMAGE_NAME" --file "Dockerfile" .


# This builds the image from a remote Git repository

az acr build --registry "$CONTAINER_REGISTRY_NAME" --image "$CONTAINER_IMAGE_NAME" --file "Dockerfile.azure-pipelines" "https://github.com/poorleno1/container-apps-ci-cd-runner-tutorial.git"




Various:

az acr task create --registry "$CONTAINER_REGISTRY_NAME" --name updateimage --context https://github.com/poorleno1/container-apps-ci-cd-runner-tutorial.git --file Dockerfile.azure-pipelines --image "$CONTAINER_IMAGE_NAME" --commit-trigger-enabled false


--commit-trigger-enabled

Indicates whether the source control commit trigger is enabled.

Thursday, December 5, 2024

Assign permissions to enterprise app using powershell

You might be required to add this account to the Directory Readers role.



#find-module Microsoft.Graph.Authentication | install-module

Disconnect-Graph
Get-MgContext
Connect-MgGraph -Scopes "Application.Read.All","AppRoleAssignment.ReadWrite.All","RoleManagement.ReadWrite.Directory" -TenantId "XXXXXXXXXXXXXXXXXXXXXXXXX"

Select-MgProfile Beta


$MdId_Name = "ManagementAutomation"

$MdId_ID = (Get-MgServicePrincipal -Filter "displayName eq '$MdId_Name'").id

$graphApp = Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'"

$graphScopes = @(
    "User.Read.All"
    "Mail.Send"
    "Mail.ReadWrite"
)


ForEach($scope in $graphScopes){
 
  $appRole = $graphApp.AppRoles | Where-Object {$_.Value -eq $scope}
 
  if ($null -eq $appRole) { Write-Warning "Unable to find App Role for scope $scope"; continue; }
 
 
 
  #Check if the permission is already assigned
  $assignedAppRole = Get-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $MdId_ID | Where-Object { $_.AppRoleId -eq $appRole.Id -and $_.ResourceDisplayName -eq "Microsoft Graph" }
 
 
 
  if ($null -eq $assignedAppRole) {
    New-MgServicePrincipalAppRoleAssignment -PrincipalId $MdId_ID -ServicePrincipalId $MdId_ID -ResourceId $graphApp.Id -AppRoleId $appRole.Id
  }else{
    write-host "Scope $scope already assigned"
  }
}



Wednesday, December 4, 2024

Assign administrator roles with PowerShell

The general approach is to get the role ID and then add the enterprise app's object ID as a member of that role:


Create an app using CLI:


$app_name = "Deployment app"

$app = az ad app create --display-name $app_name --query '{appId: appId, objectId: id}' --output json

$app = $app | ConvertFrom-Json

$cred = az ad app credential reset --id $app.appId --display-name "client-secret" --years 2

$enapp = az ad sp create --id $app.appId --query '{appId: appId, objectId: id}' --output json

$enappID = az ad sp show --id  $app.appId --query id --output tsv


Assign it to a role:


$AdminRoleObject = Get-AzureADDirectoryRole| where {$_.DisplayName -eq 'Application Administrator'} 

Add-AzureADDirectoryRoleMember -ObjectId $AdminRoleObject.ObjectId -RefObjectId $enappID


If the role does not exist ($AdminRoleObject is empty), enable it from its template:

$template = Get-AzureADDirectoryRoleTemplate | where {$_.DisplayName -eq 'Privileged Role Administrator'} 

Enable-AzureADDirectoryRole -RoleTemplateId $template.ObjectId



Other, assign owner to subscription:

az role assignment create --assignee $app.appId --role "Owner" --scope "/subscriptions/$subscriptionID"

Tuesday, December 3, 2024

list open ports

 lsof -nP -iTCP -sTCP:LISTEN


ss -tunlp

netstat -tnlp


# Install the tools if missing (procps for ps, net-tools for netstat, iproute2 for ss)
apt install iproute2 net-tools procps



#!/bin/bash
# This script lists processes with open TCP ports by reading /proc/net/tcp and
# matching socket inodes to file descriptors in /proc/[pid]/fd directories.

# Function to convert hexadecimal port number to decimal.
convert_port() {
    local hex_port=$1
    echo $((16#$hex_port))
}

echo "Processes with open TCP ports (based on /proc):"
printf "%-8s %-20s %-6s\n" "PID" "Process Name" "Port"
echo "-------------------------------------------"

# Skip the header line from /proc/net/tcp by using tail.
tail -n +2 /proc/net/tcp | while read -r line; do
    # Extract the local address (field 2) and the socket inode (field 10).
    local_address=$(echo "$line" | awk '{print $2}')
    inode=$(echo "$line" | awk '{print $10}')

    # If inode is empty, skip this line.
    if [[ -z "$inode" ]]; then
        continue
    fi

    # Extract the port (in hex) from the local_address (format: IP:PORT).
    port_hex=$(echo "$local_address" | cut -d':' -f2)
    port=$(convert_port "$port_hex")

    # Use find to look for file descriptors linking to this socket inode.
    pids=$(find /proc/[0-9]*/fd -lname "socket:\[$inode\]" 2>/dev/null | \
        cut -d'/' -f3 | sort -u)

    # For each matching process, retrieve the process name.
    for pid in $pids; do
        if [ -f "/proc/$pid/comm" ]; then
            pname=$(cat /proc/$pid/comm)
        else
            pname="N/A"
        fi
        printf "%-8s %-20s %-6s\n" "$pid" "$pname" "$port"
    done
done
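The ports in /proc/net/tcp are hexadecimal, which is why the script needs convert_port; the conversion is plain base-16 arithmetic:

```shell
# Hex ports from /proc/net/tcp can be converted without external tools.
echo $((16#01BB))      # bash base-16 literal, as used in convert_port
printf '%d\n' 0x01BB   # POSIX printf alternative
echo $((16#1F90))
```

Both forms print 443 for 01BB; 1F90 is port 8080.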



List all processes:

#!/bin/bash
# This script lists all processes by scanning the /proc filesystem.

# Print header
printf "%-8s %-s\n" "PID" "Process Name"
printf "%-8s %-s\n" "--------" "----------------"

# Loop over directories in /proc that are numerical
for pid_dir in /proc/[0-9]*; do
    pid=$(basename "$pid_dir")
    
    # Check for the existence of the comm file which contains the process name
    if [ -f "$pid_dir/comm" ]; then
        proc_name=$(cat "$pid_dir/comm")
    else
        proc_name="N/A"
    fi
    
    printf "%-8s %-s\n" "$pid" "$proc_name"
done

Monday, October 28, 2024

Publish CRLs

1. Log in to the offline Root CA and create a new CRL file:

    certutil -crl

2. Copy the CRL file from C:\Windows\System32\Certsrv\CertEnroll\ to a USB drive.

3. On the issuing servers, upload the CRL file to C:\inetpub\wwwroot\pki and any other locations where the CRL should be published, such as a file share or AD.

Publish in AD with: certutil -dspublish -f C:\CRKRoot.crl




Some Kusto queries

1. Find resources using a TLS version lower than 1.2:

resources
| where type in (
    'microsoft.web/sites/config',
    'microsoft.storage/storageaccounts',
    'microsoft.sql/servers',
    'microsoft.network/applicationgateways',
    'microsoft.cdn/profiles/endpoints',
    'microsoft.apimanagement/service',
    'microsoft.network/virtualnetworkgateways',
    'microsoft.signalrservice/signalr',
    'microsoft.servicebus/namespaces',
    'microsoft.containerservice/managedclusters'
)
| extend TlsVersion = case(
    type == 'microsoft.web/sites/config', properties.minTlsVersion,
    type == 'microsoft.storage/storageaccounts', properties.minimumTlsVersion,
    type == 'microsoft.sql/servers', properties.minimalTlsVersion,
    type == 'microsoft.network/applicationgateways', properties.sslPolicy.minProtocolVersion,
    type == 'microsoft.cdn/profiles/endpoints', properties.tlsSettings.protocolType,
    type == 'microsoft.apimanagement/service', tostring(properties.protocols),
    type == 'microsoft.network/virtualnetworkgateways', tostring(properties.vpnClientConfiguration.vpnClientProtocols),
    type == 'microsoft.signalrservice/signalr', properties.tls.minimalTlsVersion,
    type == 'microsoft.servicebus/namespaces', properties.minimumTlsVersion,
    type == 'microsoft.containerservice/managedclusters', 'TLS managed by individual deployments',
    'Unknown')
| where TlsVersion !contains "1.2" and TlsVersion != "Unknown" and TlsVersion != "TLS1_2"
| project ResourceType = type,
    ResourceName = name,
    Location = location,
    TlsVersion


2. Find blocked requests in the app gateway:

AzureDiagnostics
| where ResourceProvider == "MICROSOFT.NETWORK"
| where Category == "ApplicationGatewayFirewallLog"
| where action_s == "Matched"
| project
    TimeGenerated,
    ClientIP = clientIp_s,
    RequestURI = requestUri_s,
    RuleId = ruleId_s,
    RuleSetType = ruleSetType_s,
    Action = action_s,
    Message,
    Hostname = hostname_s,
    TransactionId = transactionId_g
| sort by TimeGenerated desc
 

3. Find timeouts:


AzureDiagnostics
| where Category == "ApplicationGatewayAccessLog"
| where httpStatus_d in (408, 504, 502)  // Common timeout-related HTTP status codes
| where host_s == "ylukscaleprod.eu.yusen-logistics.com"


4. Statistics: success rate in every 5-minute slot:

AzureDiagnostics
| where ResourceType == "APPLICATIONGATEWAYS"
| where Category == "ApplicationGatewayFirewallLog" or Category == "ApplicationGatewayAccessLog"
| where TimeGenerated >= ago(30d) // Adjust timeframe as needed
| where listenerName_s == "https-ylukscaleprod-eu-yusen-logisitcs-com" // Filter for specific listener if needed
| extend ListenerName = listenerName_s
| extend ResponseCode = httpStatus_d
| extend IsHealthy = iff(ResponseCode >= 200 and ResponseCode < 400, true, false)
| summarize
    TotalRequests = count(),
    FailedRequests = countif(not(IsHealthy)),
    SuccessRate = (count() - countif(not(IsHealthy))) * 100.0 / count()
    by bin(TimeGenerated, 5m), ListenerName, _ResourceId
| extend IsDown = iff(SuccessRate < 50, true, false) // Define downtime threshold
| order by TimeGenerated desc



5. Success rate per day over the last 7 days:

AzureDiagnostics
| where ResourceType == "APPLICATIONGATEWAYS"
| where Category == "ApplicationGatewayFirewallLog" or Category == "ApplicationGatewayAccessLog"
| where TimeGenerated >= ago(7d)
| where listenerName_s == "https-ylukscaleprod-eu-yusen-logisitcs-com" // Filter for specific listener if needed
| extend ListenerName = listenerName_s
| extend ResponseCode = httpStatus_d
| extend IsHealthy = iff(ResponseCode >= 200 and ResponseCode < 400, true, false)
| summarize
    TotalRequests = count(),
    FailedRequests = countif(not(IsHealthy)),
    SuccessRate = (count() - countif(not(IsHealthy))) * 100.0 / count()
    by bin(TimeGenerated, 1d), ListenerName, _ResourceId
| extend IsDown = iff(SuccessRate < 50, true, false) // Define downtime threshold
| order by TimeGenerated desc



6. Blocked (matched) WAF requests for a specific hostname:

AzureDiagnostics
| where ResourceProvider == "MICROSOFT.NETWORK"
| where Category == "ApplicationGatewayFirewallLog"
| where action_s == "Matched"
| where hostname_s == "ylukscaleprod.eu.yusen-logistics.com"
| project
    TimeGenerated,
    ClientIP = clientIp_s,
    RequestURI = requestUri_s,
    RuleId = ruleId_s,
    RuleSetType = ruleSetType_s,
    Action = action_s,
    Message,
    Hostname = hostname_s,
    TransactionId = transactionId_g
| sort by TimeGenerated desc


Tuesday, October 15, 2024

Converting VM to generation 2 in Hyper-v

1. Create a VM from Vagrant; this is Gen 1.

2. Although the disk is already VHDX, export it and add more space in Hyper-V.

3. Attach this drive to the old VM and expand the drive in Disk Management.

4. Convert to GPT using MBR2GPT:

    mbr2gpt.exe /validate /disk:1 /allowFullOS

    mbr2gpt.exe /convert /disk:1 /allowFullOS

This will create an EFI system partition at the end of the disk.



5. Create a new VM (Gen 2) and use the exported drive.


Thursday, September 19, 2024

openssl basics

1. Check web site certificate expiration:

 

echo test | openssl s_client -connect google.com:443 | openssl x509 -noout -dates


2. PFX to PEM conversion (when the PFX has no password):

 

openssl pkcs12 -in cert-in.pfx -out cert-out.pem -nodes 


3. copying with scp:


scp -i C:\Users\name\OneDrive\ssh\MyPrv.pem .\file.pem remote-host.com:/tmp


4. Logging to remote host with local port forward


az ssh vm --resource-group rg-poc --name ubuntu1 -- -L 1433:localhost:1433

5. Add a password to PFX file:

openssl pkcs12 -in  kv-msr-acme-letw.pfx -out c:\temp\temp1.pem -nodes

When prompted for the import password, just press Enter (the source PFX has no password).

.\openssl pkcs12 -export -in c:\temp\temp1.pem -out c:\temp\new_protected1.pfx -passout pass:strongpass


6. Convert a CER file into PEM:

openssl x509 -inform der -in .\RootCA.cer -out .\RootCA.pem
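To experiment with these x509 flags without querying a live site, a throwaway self-signed certificate works just as well (file names below are arbitrary):

```shell
# Create a throwaway self-signed cert valid for 30 days...
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.test" \
    -keyout /tmp/throwaway-key.pem -out /tmp/throwaway-cert.pem -days 30 2>/dev/null

# ...then read its validity window, as in tip 1 above:
openssl x509 -noout -dates -in /tmp/throwaway-cert.pem

# -checkend exits 0 if the cert is still valid N seconds from now:
openssl x509 -checkend 86400 -in /tmp/throwaway-cert.pem >/dev/null && echo "valid for at least one more day"
```

The -checkend flag is handy in monitoring scripts, since the exit code can drive an alert directly.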


Wednesday, August 7, 2024

solving issues The following signatures were invalid: EXPKEYSIG ABF5BD827BD9BF62

How to solve this:


The following signatures were invalid: EXPKEYSIG ABF5BD827BD9BF62 nginx signing key <signing-key@nginx.com>


1. vi /etc/apt/sources.list.d/nginx.list


The keyring referenced by signed-by below is the key used for signature verification; it must be updated:

deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/ubuntu/ focal nginx
# deb-src http://nginx.org/packages/ubuntu/ focal nginx


Update key:


curl -s https://nginx.org/keys/nginx_signing.key | gpg --dearmor > /usr/share/keyrings/nginx-archive-keyring.gpg
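The --dearmor step just converts the ASCII-armored key into the binary OpenPGP format that apt's signed-by option expects. The conversion is a lossless encoding, which can be sanity-checked locally with its inverse, --enarmor (assuming gpg is installed):

```shell
# --enarmor wraps arbitrary bytes in ASCII armor; --dearmor reverses it.
# Round-tripping arbitrary data shows nothing is lost in the conversion:
printf 'test-data' | gpg --enarmor 2>/dev/null | gpg --dearmor 2>/dev/null
```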

Saturday, March 9, 2024

mysql replication

Generally follow this procedure; it works fine: https://learn.microsoft.com/en-us/azure/mysql/single-server/how-to-data-in-replication


install mydumper on a MySQL source server:

apt-get install mydumper


Turn on binary logging. In the mysqld section, add the following line:

log-bin=mysql-bin.log

Restart the server



Set your DB to read only mode:

mysql -uUserName -pPassWord -DDatabaseName <<<"FLUSH TABLES WITH READ LOCK;"

mysql -uUserName -pPassWord -DDatabaseName <<<"SET GLOBAL read_only = 1;"

 

Check your master status; run this before starting the backup:

mysql -uUserName -pPassWord -DDatabaseName <<<"show master status;"

mysql: [Warning] Using a password on the command line interface can be insecure.

File    Position        Binlog_Do_DB    Binlog_Ignore_DB        Executed_Gtid_Set

mysql-bin.000084        522687808
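The File and Position values above are exactly what the az_replication_change_master call needs later, so it can help to capture them in variables. A sketch using the sample output above (in practice the line would come from the mysql command, not a literal):

```shell
# Sample "show master status" data line (whitespace separated: File, Position);
# in practice: status_line=$(mysql ... <<<"show master status;" | tail -n 1)
status_line='mysql-bin.000084        522687808'

# awk splits on any run of whitespace, so tabs and spaces both work.
binlog_file=$(echo "$status_line" | awk '{print $1}')
binlog_pos=$(echo "$status_line" | awk '{print $2}')

echo "binlog file: $binlog_file, position: $binlog_pos"
```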


Dump required databases:

mydumper --regex='^(?!(backup|percona|mysql|sys|information_schema|performance_schema))'  --host=localhost --user=UserName --password=PassWord --outputdir=backup --rows=500000 --compress --build-empty-files --threads=16 --compress-protocol --kill-long-queries --lock-all-tables -L mydumper-logs.txt


Check again after running the backup:

mysql -uUserName -pPassWord -DDatabaseName <<<"show master status;"


This should show the same value as before.


When the backup is finished, unlock the tables:

mysql -uUserName -pPassWord -DDatabaseName <<<"SET GLOBAL read_only = OFF;"

mysql -uUserName -pPassWord -DDatabaseName <<<"UNLOCK TABLES;"


Restore databases: 

myloader -h 'mysql.mysql.database.azure.com' --user=UserName --password=PassWord --directory=/var/lib/mysql/backup --queries-per-transaction=500 --threads=16 --compress-protocol --verbose=3 -e 2>myloader-logs.txt


Create a user on source server:

CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';

GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%';


Setup synchronization on destination machine:


CALL mysql.az_replication_change_master('yourVmName.uksouth.cloudapp.azure.com', 'syncuser', 'yourpassword', 3306, 'mysql-bin.000084', 522687808, '');


Check status: 

show slave status;


Start synchronization:

CALL mysql.az_replication_start;


Troubleshooting:


Example error: Worker 1 failed executing transaction 'ANONYMOUS' at master log mysql-bin.000084, end_log_pos 526850979; Error executing row event: 'Table 'Table1' doesn't exist'


Solution: filter out Table1 (replication filtering) in the Azure portal:



Check for errors: 

select * from performance_schema.replication_applier_status_by_worker;



Tuesday, January 9, 2024

copy SSRS report

 1. Download https://raw.githubusercontent.com/microsoft/sql-server-samples/master/samples/features/reporting-services/ssrs-migration-rss/ssrs_migration.rss

2. Execute:

    C:\rss>"C:\Program Files\Microsoft SQL Server Reporting Services\Shared Tools\RS.exe" -i ssrs_migration.rss -e Mgmt2010 -s https://source/reportserver -v ts="http://destination/Reportserver"



Sunday, January 7, 2024

tips for securing environment

 1. Pass the hash mitigation:




All Windows services have their own SIDs, which can be used to grant access to internal resources.





Access tokens can be viewed with "whoami /all"; a token consists of the user's SID, group memberships and privileges (no matter whether those are in a disabled state).


A security descriptor is the lock (as opposed to access tokens, which are the keys); it defines who has access to which resource and contains the DACL (the actual permission list), the SACL (auditing info) and ownership info.

The DACL is managed by anyone who has "Full Control" permission; the SACL can be managed only by someone who holds the corresponding user privilege (which usually means belonging to the local admin group).


Rule sets are read from the top down, and inherited entries have lower priority than local (explicit) ones:



Deny always wins when set explicitly, as opposed to inherited permissions (explicit entries are evaluated before inherited ones).

Privileges always beat permissions.

Even if a domain admin sets a DENY on the "take ownership" permission for a local admin, the local admin can still take ownership, because the "Take ownership of files and other objects" privilege overrides the DACL.




Tip for connecting to a disconnected user session: start taskmgr as SYSTEM and use Connect on the session; you will not be asked for a password.