Tuesday, 4 December 2018

Remote Debug of a Windows Server Core hosted application from VS 2017

If you have an application hosted on a Windows Server 2016 Core machine and would like to debug it directly from Visual Studio 2017, it's very simple to configure remote debugging.

First, you must download the Visual Studio Remote Tools from this link. Save the installer somewhere on the machine's disk, for example "C:\temp\vs_remotetools.exe".

Next, using the command-line as an administrator, invoke the following command:

C:\temp\vs_remotetools.exe /install /quiet


This will silently install the remote tools. I usually like to have the tools running as a service so I don't have to start them each time. This is very convenient, especially for development environments.

Go to "C:\Program Files\Microsoft Visual Studio 15.0\Common7\IDE" and execute rdbgwiz.exe. Despite Server Core being a no-GUI edition, some applications do show a user interface, and fortunately this is one of them.

Click Next and on the second screen be sure to select "Run the Visual Studio 2017 Remote Debugger service". You can then enter the credentials to use when running the service, or leave the default "LocalSystem" account if you prefer. After clicking Next you'll have the chance to select the type of networks to use for debugging, and if all goes well the last screen will report that the service was configured correctly.

To ensure the service is running, execute the following in PowerShell:

Get-Service -Name msvsmon150


Now that the remote debugger is up and running, open your project in Visual Studio 2017, set the breakpoints you need and open the "Attach to Process" dialog (Debug > Attach to Process). Enter the IP of the machine where you just installed the remote debugger and use port 4022, which is the default. Then click Refresh and you should see the list of processes running on that machine, as in the figure below:


For the final step, just select the process you want and click Attach. Execute any necessary steps and the debugger should now stop at the breakpoints you've set up.

If you'd like to run the debugger tools on demand instead of as a service, you can run msvsmon.exe each time it's needed; by default it is located under "C:\Program Files\Microsoft Visual Studio 15.0\Common7\IDE\Remote Debugger\x64".

Hope this is useful to you. Feel free to comment. See you on a next post.

Thursday, 27 September 2018

Configuring custom endpoint for Azure Service Fabric applications

When creating clusters in Azure Service Fabric, one of the options you have when configuring the node types is to define the custom endpoints needed for your Service Fabric applications.



However, after creating the cluster, if you go back to the node type details you'll notice that there's no option to allow you to customize the endpoints, either by adding new ones or removing the ones configured initially when creating the cluster.



I did some searching and found that several people out there couldn't find a solution for this and came up with various workarounds, one of which was to delete everything and recreate the cluster from scratch! That is obviously never a good option.

Let's try to avoid that kind of solution. Notice that when you create a cluster, one of the resources that gets configured is a load balancer. It is in that resource that you can configure the custom endpoint ports needed to reach your services from the outside world.

Go to the load balancer configuration and open the Health probes section. There you need to add a new health probe configured for the port you need for your endpoint, in this case 21000.


When Azure finishes creating the health probe, go to the Load balancing rules section and create a new rule that uses that health probe and is configured for the same port, as in the example below.


And that's it. Your service is now exposed to the outside world via the chosen port.
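If you prefer to script these changes instead of clicking through the portal, the probe and rule correspond to entries on the load balancer's ARM resource. Below is a minimal, illustrative fragment of a Microsoft.Network/loadBalancers definition; the names are hypothetical, the probe id placeholder must reference your own resource, and a real rule also needs frontendIPConfiguration and backendAddressPool references, omitted here for brevity.

```json
"probes": [
  {
    "name": "ServiceFabricAppProbe",
    "properties": {
      "protocol": "Tcp",
      "port": 21000,
      "intervalInSeconds": 15,
      "numberOfProbes": 2
    }
  }
],
"loadBalancingRules": [
  {
    "name": "ServiceFabricAppRule",
    "properties": {
      "protocol": "Tcp",
      "frontendPort": 21000,
      "backendPort": 21000,
      "probe": {
        "id": "[resource id of ServiceFabricAppProbe]"
      }
    }
  }
]
```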

Happy coding.


Thursday, 28 June 2018

Upgrading XRDP in Ubuntu 16.04

In the past week we had some issues at work while trying to connect to some Ubuntu servers via RDP. Since we have Windows laptops, it’s very common for us to use this protocol to connect remotely to our Linux machines, so we always install XRDP on those servers whenever we need a remote connection to a graphical interface.

Unfortunately, the XRDP version that you get from the standard apt-get command in Ubuntu 16.04 is really outdated, which causes the following problems:
  • Crashes in some applications that require more graphical capabilities, such as the Robot Operating System (ROS). In this case the graphical interfaces only worked when connecting directly to the virtual machine, not via XRDP;
  • Protocol errors when trying to RDP into the machines using the Microsoft Remote Desktop app that we have installed on our Surface Hub (yes, we have acquired one of those and it’s simply awesome!).
After some research we decided to update the XRDP package, but we really didn’t want to go through all the trouble of getting the source code from the XRDP GitHub repo and compiling it ourselves. So our approach was to add a PPA to our machines, in this case one from hermlnx. To upgrade XRDP we just had to run the following commands with sudo privileges:

sudo add-apt-repository ppa:hermlnx/xrdp
sudo apt-get update
sudo apt-get upgrade xrdp
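As a quick sanity check after the upgrade, you can confirm that the version apt now reports (shown by `apt-cache policy xrdp`) is really newer than the stock one. `sort -V` does a version-aware comparison; the version numbers below are illustrative, not the exact ones shipped.

```shell
# sort -V compares version strings numerically, so the newest version
# lands last; handy for comparing the stock package with the PPA one.
STOCK="0.6.1"
UPGRADED="0.9.5"
NEWEST=$(printf '%s\n%s\n' "$STOCK" "$UPGRADED" | sort -V | tail -n 1)
echo "$NEWEST"   # → 0.9.5
```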


After the upgrade, all the above issues were gone and we noticed some other improvements, such as:
  • Every time we reconnect to those machines via XRDP, the existing session is now recovered instead of a new one being created, as happened with the previous version. It was possible to work around that, but it required changing some XRDP settings and remembering the port used every time we established a session, so that we could use the same port when reconnecting;
  • The keyboard layout (we use en-GB here) is now automatically recognised instead of defaulting to en-US. There were also workarounds for that with the previous XRDP versions, but they required a lot of commands and tweaks to the XRDP keyboard maps.
So if you’re having the same issues, or even others, when establishing RDP connections to Ubuntu 16.04, I truly recommend updating XRDP before anything else. I’m certain it will help you a lot!

See you on the next post.

Monday, 22 January 2018

Installing Pentaho CE 8.0 with PostgreSQL on Ubuntu Server 16.04 LTS

Last week I had to install a Pentaho CE 8.0 instance on a Linux virtual machine. Ubuntu was the chosen distro and, for many reasons, I like to keep it simple, so instead of installing the desktop version I decided to run the software I needed on top of Ubuntu Server 16.04 LTS. Since this was the first time I had to install Pentaho CE, I looked for articles to guide me through the process, but unfortunately most of them were outdated or used graphical interfaces that I didn’t have and didn’t want to add, otherwise I wouldn’t have chosen the Ubuntu server version in the first place.

This post is a compilation of all the steps I had to perform to install the needed software. The steps below don’t cover the installation of the Ubuntu Server distro itself, so it’s expected that you already have a (virtual) machine with Ubuntu Server installed. Please also bear in mind that I’ll be using almost all default passwords and configurations to keep this post as simple as possible.
  1. Before we start
  2. Install Java Runtime 8
  3. Getting Pentaho Server 8.0 CE
  4. PostgreSQL installation
  5. Configuring Pentaho connection to PostgreSQL
  6. Running Pentaho Server

Before we start


# Update the apt-get index

sudo apt-get update

# If you are running a clean Ubuntu install I also recommend upgrading the system

sudo apt-get upgrade

# Since we'll later need to unzip some files let's also install the unzip package.

sudo apt-get install unzip

Install Java Runtime 8


# This command will install the Java JRE package, specifically the OpenJDK 8

sudo apt-get install default-jre

# Next we need to set the JAVA_HOME environment variable, but before that we need to check where exactly Java is installed


sudo update-alternatives --config java

# Copy the Java installation path and use an editor to change the environment variables

sudo vim /etc/environment

# At the end of the file add the following line, making sure to replace the path with the one you copied above

JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64"

# Save the file and reload it. After that you can check if the environment variable is properly configured with a simple echo command

source /etc/environment
echo $JAVA_HOME
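The JAVA_HOME value is just the java binary path with the trailing /jre/bin/java stripped off. Here is a small sketch of that derivation, using a hypothetical path in the same shape as the one `update-alternatives --config java` prints on this setup:

```shell
# Hypothetical java binary path, as listed by `update-alternatives --config java`
JAVA_BIN="/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java"

# Strip the trailing /jre/bin/java suffix to obtain the JDK root
JAVA_HOME="${JAVA_BIN%/jre/bin/java}"
echo "$JAVA_HOME"   # → /usr/lib/jvm/java-8-openjdk-amd64
```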

Getting Pentaho Server 8.0 CE


# Download the package using wget. Please bear in mind that the file name below is the most recent at the time of writing. You can double-check the correct link on SourceForge (https://sourceforge.net/projects/pentaho/files/).
wget https://sourceforge.net/projects/pentaho/files/Pentaho%208.0/server/pentaho-server-ce-8.0.0.0-28.zip/download -O pentaho-server-ce-8.0.0.0-28.zip

# When the download completes we need to unzip the file and move the extracted pentaho-server directory to its final location. In this example I'll just put everything under /opt/pentaho

unzip pentaho-server-ce-8.0.0.0-28.zip
sudo mkdir /opt/pentaho
sudo mv pentaho-server /opt/pentaho

# Next we'll create a pentaho group and user on the machine and configure it to be the owner of the /opt/pentaho path

sudo addgroup pentaho
sudo adduser --system --ingroup pentaho --disabled-login pentaho
sudo chown -R pentaho:pentaho /opt/pentaho

PostgreSQL installation


# It is time to install the PostgreSQL instance. Once again we'll just use apt-get to install the required package
sudo apt-get install postgresql

# By default a postgres user will be created, without any password. So next step is to set the password for this user
# The following commands will open psql with postgres user, set its password and quit psql

sudo -u postgres psql postgres
\password
\q

# Now that we have a password set for the default user, we need to open the pg_hba.conf file using an editor and allow local connections.

sudo vim /etc/postgresql/9.5/main/pg_hba.conf

# Locate the line that looks like:

local all all peer

# Replace the peer method with md5, save and close the file, and restart postgresql
sudo service postgresql restart
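If you prefer to make the pg_hba.conf change non-interactively, the same edit can be done with sed. The sketch below runs against a throwaway copy so it is safe to try anywhere; on a real system you would point it at /etc/postgresql/9.5/main/pg_hba.conf (as root) instead.

```shell
# Work on a throwaway copy containing the relevant pg_hba.conf line
PG_HBA="/tmp/pg_hba_demo.conf"
printf 'local   all             all                                     peer\n' > "$PG_HBA"

# Swap the auth method from peer to md5 on the "local all all" line only
sed -i 's/^\(local[[:space:]]\+all[[:space:]]\+all[[:space:]]\+\)peer/\1md5/' "$PG_HBA"

cat "$PG_HBA"
```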

Configuring Pentaho connection to PostgreSQL


# Almost there, but first we need to run some SQL scripts to create the needed databases on the PostgreSQL instance.

cd /opt/pentaho/pentaho-server/data/postgresql
psql -U postgres -h 127.0.0.1 -p 5432 -f create_jcr_postgresql.sql
psql -U postgres -h 127.0.0.1 -p 5432 -f create_quartz_postgresql.sql
psql -U postgres -h 127.0.0.1 -p 5432 -f create_repository_postgresql.sql

# Next we'll configure JDBC to use PostgreSQL

cd /opt/pentaho/pentaho-server/tomcat/webapps/pentaho/META-INF
sudo vim context.xml

# In both of the Resources configured in that XML file, make sure to change the driverClassName, url and validationQuery attributes so that the final result looks something like this:

<?xml version="1.0" encoding="UTF-8"?>
<Context path="/pentaho" docbase="webapps/pentaho/">
   <Resource name="jdbc/Hibernate" auth="Container" type="javax.sql.DataSource"
     factory="org.apache.tomcat.jdbc.pool.DataSourceFactory" maxActive="20" minIdle="0"
     maxIdle="5" initialSize="0" maxWait="10000" username="hibuser" password="password"
     driverClassName="org.postgresql.Driver"
     url="jdbc:postgresql://127.0.0.1:5432/hibernate" validationQuery="select 1" />
   <Resource name="jdbc/Quartz" auth="Container" type="javax.sql.DataSource"
     factory="org.apache.tomcat.jdbc.pool.DataSourceFactory" maxActive="20" minIdle="0"
     maxIdle="5" initialSize="0" maxWait="10000" username="pentaho_user"
     password="password" driverClassName="org.postgresql.Driver"
     url="jdbc:postgresql://127.0.0.1:5432/quartz" validationQuery="select 1" />
</Context>

Running Pentaho Server


# Start the Pentaho Server 8.0
cd /opt/pentaho/pentaho-server
sudo ./start-pentaho.sh

# If needed use ifconfig to find out your current IP address
ifconfig


On another machine, open a web browser and hit port 8080 of the machine where Pentaho Server is running. If everything went as expected you should see the screen below.



Enjoy and see you on a next post.

Wednesday, 13 December 2017

Styling RadAutoCompleteTextView with Nativescript and Angular2

Lately I’ve been working a lot with NativeScript to develop mobile apps able to run on Android and iOS devices.

One of the apps that I just finished required me to use the nativescript-pro-ui controls, which proved to be quite useful. In this particular case I had to use the AutoComplete control. Everything was going fine until the moment I had to apply some styling to that control using CSS.

As expected, I created a class in my stylesheet and applied it to the RadAutoCompleteTextView, but unfortunately that wasn’t enough. Some of the properties, such as the border colour and width, were applied correctly, but for some reason the font size and font face were just being ignored. After some research I found an open issue on their GitHub repo from someone else with a similar problem. The suggested workaround used NativeScript Core instead of Angular2, so I tried to do something similar.

Once again I had to face the fact that developing with frameworks such as NativeScript, Xamarin and others still requires you to understand how native Android and iOS code works; otherwise you’ll have a really hard time figuring out how to apply transformations to the native controls that are generated when the app is running.

Below are some code snippets that worked like a charm. I think the code speaks for itself, but basically I’m getting references to the native controls on each platform and setting the font face and the padding of the text field control that RadAutoCompleteTextView generates.

home.component.html

<RadAutoCompleteTextView [items]="customersList" class="form-input" hint="Search" suggestMode="Suggest" displayMode="Plain" (didAutoComplete)="onCustomerSelected($event)" (loaded)="onAutoCompleteLoad($event)">
    <SuggestionView tkAutoCompleteSuggestionView>
        <ng-template tkSuggestionItemTemplate let-customer="item">
            <StackLayout orientation="vertical" class="picker-entry">
                <Label [text]="customer.text"></Label>
            </StackLayout>
        </ng-template>
    </SuggestionView>
</RadAutoCompleteTextView>


home.component.ts

import { TokenModel, AutoCompleteEventData, RadAutoCompleteTextView } from "nativescript-pro-ui/autocomplete";
import { isAndroid, isIOS } from "platform";
import * as fs from "file-system";

// (...)

public onAutoCompleteLoad(args: AutoCompleteEventData) {
    var autocomplete = args.object;

    if (isAndroid) {
        let rad = autocomplete.android;
        let nativeEditText = rad.getTextField();
        let currentApp = fs.knownFolders.currentApp();
        let fontPath = currentApp.path + "/fonts/futurat.ttf";

        nativeEditText.setTextSize(14);
        nativeEditText.setTypeface(android.graphics.Typeface.createFromFile(fontPath));
        nativeEditText.setPadding(7, 16, 7, 16);
     } else if (isIOS) {
        let rad = autocomplete.ios;
        let nativeEditText: UITextField = rad.textField;

        nativeEditText.font = UIFont.fontWithNameSize("FuturaT", 14);
        nativeEditText.leftView = new UIView({ frame: CGRectMake(7,16,7,16) });
        nativeEditText.leftViewMode = UITextFieldViewMode.Always;
     }
}


Happy coding and see you on a next post!

Tuesday, 14 November 2017

Cleaning up ASP.NET temp files with Powershell

One of the things you'll certainly have to deal with when developing ASP.NET applications is when you try to debug some piece of code but, for some reason, it seems to keep loading old versions of the assemblies instead of the most recent build you just did.

This is certainly frustrating and one of the most common reasons for it to happen is due to the temporary ASP.NET files that are stored on your machine. Cleaning those is not hard at all and the following PowerShell script will do that for you.

Get-ChildItem "C:\Windows\Microsoft.NET\Framework\v*\Temporary ASP.NET Files" -Recurse | Remove-Item -Recurse
Get-ChildItem "C:\Windows\Microsoft.NET\Framework64\v*\Temporary ASP.NET Files" -Recurse | Remove-Item -Recurse
Get-ChildItem "$Env:Temp\Temporary ASP.NET Files" -Recurse | Remove-Item -Recurse


Not only does it remove the temporary files stored under each of the framework version directories in your local Windows installation folder (both 32-bit and 64-bit), but it also deletes the ones that keep getting stored under your user's AppData folders. This way you can be sure to delete files related to both IIS and IIS Express instances.

Just save the above in a .ps1 file and run it with full privileges every time it seems the debug session is not picking up the right version of a DLL.

Hope this is useful for you. See you on a next post.

Wednesday, 1 November 2017

Begin cross platform desktop development with Electron and Typescript

If you would like to create a cross platform desktop application and already have the skills and experience on web development using HTML, JavaScript and CSS, then Electron is probably what you are looking for.

Formerly known as Atom Shell and currently developed by GitHub, this framework allows you to build desktop GUI applications by leveraging some well-known technologies such as the Node.js runtime and Chromium.

Some well-known examples of applications developed using Electron include the Discord client, the GitHub Desktop application, and the Atom and VS Code editors, amongst many others.
Please follow the links to learn more, but right now let’s start with our very own first Electron application. Instead of using plain vanilla JavaScript, let’s leverage another great language: TypeScript.

The first thing to do is create a directory for your app and, inside it, run the commands below to initialise npm and install the electron and typescript packages. Additionally, we can install electron-reload, a very useful package that allows us to see any changes to our code reflected in a running application without having to restart it.

npm init -y
npm install electron --save
npm install typescript electron-reload --save-dev

Now that the base setup is done we need to create the entry point for our application, configure the Typescript compiler and perform some additional changes to our package.json file in order to make our life easier whenever we need to run the application.

That being said, let's start by creating a src directory under the root - not necessary, but I just like to keep things organised this way - and the following two files inside of it:

main.html - this one will contain the markup for our main page. It's just plain HTML, similar to what you would create for any website.

<!DOCTYPE html>
<html>
    <head></head>
    <body>
        <h1>Hello Electron World!</h1>
    </body>
</html>


main.ts - which will contain the “entry point” code for our application. I think the code below speaks for itself. All we are doing for now is opening our main HTML page in a new window and setting some values for its size and position. We also initialise the electron-reload package so it listens for any changes to our code.

import { app, BrowserWindow } from "electron";
import * as electronReload from "electron-reload";

app.on("ready", () => {
    let mainWindow: BrowserWindow;

    mainWindow = new BrowserWindow({
        width: 1024,
        height: 768,
        center: true
    });

    // __dirname points to the "root" directory of the application
    // in this case it's the absolute path to "src"
    mainWindow.webContents.loadURL(`file://${ __dirname }/main.html`);
});

electronReload(__dirname);


One thing to notice is the way we load the page URL using the file:// protocol and the __dirname variable. We have to bear in mind that this is a desktop application, not a web application, despite the fact that we are using web technologies. This means we are loading files located on disk, not web addresses, so we need a way to open them and to know where they are located.

That’s exactly what we are doing here: the file:// protocol allows us to load a file stored on the device, and __dirname points to the root executing directory of our application, which in this case is the src folder.
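To make the resulting URL concrete, here is a small shell sketch of the string main.ts builds, using a hypothetical project location (the real value of __dirname depends on where the app runs from):

```shell
# Hypothetical location of the app sources; __dirname would hold this
DIRNAME="/home/user/electron-tsc-app/src"

# The same string interpolation performed in main.ts
URL="file://${DIRNAME}/main.html"
echo "$URL"   # → file:///home/user/electron-tsc-app/src/main.html
```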

Now that we have our entry point files created, we need to configure TypeScript by adding a tsconfig.json file in the root directory with the following content:


{
    "compilerOptions": { 
        "rootDir": "./src", 
        "module": "commonjs", 
        "target": "es5" 
     } 
}

Finally, update the package.json file to properly configure the entry point of the application and to add a new start script that triggers the TypeScript compiler and then runs our Electron application.


{
    "name": "electron-tsc-app", 
    "version": "1.0.0", 
    "description": "", 
    "main": "src/main.js", 
    "scripts": { 
        "start": "tsc && electron ." 
    }, 
    "keywords": [], 
    "author": "", 
    "license": "ISC", 
    "dependencies": { 
        "electron": "^1.7.9" 
    }, 
    "devDependencies": { 
        "electron-reload": "^1.2.2", 
        "typescript": "^2.5.3" 
    } 
}


We are now ready to start our app. All we have to do is execute npm start from the command line:



Our app is now running, and since we included electron-reload we’ll also be able to change the code and see those changes reflected in the application without having to restart it.

Nice, right? This is just the beginning. See you on a next post.