pmichaels.net Report : Visit Site


  • Alexa Global Ranking: #1,462,688; Alexa Ranking in United Kingdom: #198,618

    Server: cloudflare

    Main IP address: 185.119.173.59; server: -; ISP: -; TLD: net; country code: -

    Description: a blog about one man's journey through code… and some pictures of the Peak District.

    This report was last updated on 12-Jun-2018.

Created Date: 2014-04-18
Changed Date: 2017-04-15

Technical data for pmichaels.net


GeoIP provides information such as latitude, longitude and ISP (Internet Service Provider). Our GeoIP service located the host for pmichaels.net: it is currently hosted in - and its service provider is -.

Latitude: 0
Longitude: 0
Country: - (-)
City: -
Region: -
ISP: -

HTTP Header Analysis


HTTP headers are part of the HTTP protocol: a user's browser sends them to the web server (which here identifies itself as cloudflare), detailing what the browser wants and what it will accept back from the server.

Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Content-Encoding: gzip
Transfer-Encoding: chunked
Vary: Accept-Encoding
Server: cloudflare
Connection: keep-alive
Link: ; rel="https://api.w.org/", ; rel=shortlink
Date: Tue, 12 Jun 2018 07:07:50 GMT
CF-RAY: 429a7b899d939200-EWR
Content-Type: text/html; charset=UTF-8

DNS

SOA: ian.ns.cloudflare.com. dns.cloudflare.com. 2027760542 10000 2400 604800 3600
NS: ian.ns.cloudflare.com.
    rachel.ns.cloudflare.com.
MX: preference = 0, mail exchanger = mail3.eqx.gridhost.co.uk.
IPv4: 104.24.113.232 (ASN 13335, CLOUDFLARENET - Cloudflare, Inc., US)
      104.24.112.232 (ASN 13335, CLOUDFLARENET - Cloudflare, Inc., US)
IPv6: 2400:cb00:2048:1::6818:70e8 (ASN 13335, CLOUDFLARENET - Cloudflare, Inc., US)
      2400:cb00:2048:1::6818:71e8 (ASN 13335, CLOUDFLARENET - Cloudflare, Inc., US)

HtmlToText

The Long Walk – a blog about one man's journey through code… and some pictures of the Peak District

Short Walks – Object Locking in C#

While playing with Azure Event Hubs, I decided that I wanted to implement a thread-locking mechanism that didn't queue. That is, I want to try to get a lock on the resource and, if it's currently in use, just forget it and move on. The default behaviour in C# is to wait for the resource. For example, consider my method:

```csharp
static async Task MyProcedure()
{
    Console.WriteLine($"Test1 {DateTime.Now}");
    await Task.Delay(5000);
    Console.WriteLine($"Test2 {DateTime.Now}");
}
```

I could execute this 5 times like so:

```csharp
static async Task Main(string[] args)
{
    Parallel.For(1, 5, (a) =>
    {
        MyProcedure();
    });
    Console.ReadLine();
}
```

If I wanted to lock this (just bear with me and assume that makes sense for a minute), I might do this:

```csharp
private static object _lock = new object();

static async Task Main(string[] args)
{
    Parallel.For(1, 5, (a) =>
    {
        //MyProcedure();
        Lock();
    });
    Console.ReadLine();
}

static void Lock()
{
    Task.Run(() =>
    {
        lock (_lock)
        {
            MyProcedure().GetAwaiter().GetResult();
        }
    });
}
```

I re-jigged the code a bit, because you can't await inside a lock statement; and obviously, just making the method call synchronous would not be locking the asynchronous call. So now I've successfully made my asynchronous method synchronous: each execution of `MyProcedure` will happen sequentially, because `lock` queues the locking calls behind one another. However, imagine the Event Hub scenario that's referenced in the post above. I have, for example, a game, and it's sending a large volume of telemetry up to the cloud; in my particular case, I'm sending a player's current position.

If I have a locking mechanism whereby the locks are queued, then I could potentially get behind; and if that happens then, at best, the data sent to the cloud will be outdated and, at worst, it will use up game resources, potentially causing lag. After a bit of research, I found an alternative:

```csharp
private static object _lock = new object();

static async Task Main(string[] args)
{
    Parallel.For(1, 5, (a) =>
    {
        //MyProcedure();
        //Lock();
        TestTryEnter();
    });
    Console.ReadLine();
}

static async Task TestTryEnter()
{
    bool lockTaken = false;
    try
    {
        // Try to take the lock without waiting; lockTaken tells us whether we got it
        Monitor.TryEnter(_lock, 0, ref lockTaken);
        if (lockTaken)
        {
            await MyProcedure();
        }
        else
        {
            Console.WriteLine("Could not get lock");
        }
    }
    finally
    {
        if (lockTaken)
        {
            Monitor.Exit(_lock);
        }
    }
}
```

So here, I try to get the lock and, if the resource is already locked, I simply give up and go home. There are obviously a very limited number of uses for this; however, my Event Hub scenario, described above, is one of them. Depending on the type of data that you're transmitting, it may make much more sense to have a go and, if you're in the middle of another call, simply abandon the current one.

This entry was posted in Azure Event Hub, C# and tagged async, asynchronous, await, example, GetAwaiter, GetResult, lock, Parallel.For, synchronous, Task.Run, TryEnter on June 2, 2018 by pcmichaels.

Playing with Azure Event Hub

I've recently been playing with the Azure Event Hub. This is basically a way of transmitting large amounts* of data between systems. In a later post, I may try to test these limits by designing some kind of game based on this. As a quick disclaimer, it's worth bearing in mind that I am playing with this technology, and so much of the content of this post can be found in the links at the bottom of this post; you won't find anything original here, just a record of my findings. You may find more (and more accurate) information in those.
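As an aside to the object-locking post above (this sketch is mine, not the original author's): `Monitor` is thread-affine, so awaiting while it is held is fragile. A `SemaphoreSlim` with a zero timeout gives the same try-and-give-up semantics and is safe to hold across an `await`:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class NonQueuingLockDemo
{
    // SemaphoreSlim(1, 1) acts as a mutual-exclusion gate that may be held across await
    private static readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);

    static async Task<bool> TryProcedure()
    {
        // Wait(0): attempt to enter without blocking; returns false if the gate is taken
        if (!_gate.Wait(0))
        {
            Console.WriteLine("Could not get lock");
            return false;
        }
        try
        {
            await Task.Delay(100); // stand-in for the real asynchronous work
            return true;
        }
        finally
        {
            _gate.Release();
        }
    }

    static async Task Main()
    {
        // The first call takes the gate; a second attempt made before it finishes is refused
        Task<bool> first = TryProcedure();
        Task<bool> second = TryProcedure();
        Console.WriteLine($"first: {await first}, second: {await second}");
    }
}
```

Because `Wait(0)` returns immediately, a caller that loses the race simply skips the work: the same non-queuing behaviour the post builds with `Monitor.TryEnter`, without the thread-affinity caveat.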
Event Hub Namespace

The first step, as with many Azure services, is to create a namespace. For a healthy amount of data transference, you'll pay around £10 per month. Then we'll create an Event Hub within the namespace. When you create the Event Hub, it asks how many partitions you need. This basically splits the message delivery; and it's clever enough to work out that, if you have 3 partitions and two listeners, one should have two slots, and one, one slot. We'll need an access policy so that we have permission to listen.

New Console Apps

We'll need to create two applications: a producer and a consumer. Let's start with the producer. Create a new console app and add this NuGet library. Here's the code:

```csharp
class Program
{
    private static EventHubClient eventHubClient;
    private const string EhConnectionString = "Endpoint=sb://pcm-testeventhub.servicebus.windows.net/;SharedAccessKeyName=Publisher;SharedAccessKey=key;EntityPath=pcm-eventhub1";
    private const string EhEntityPath = "pcm-eventhub1";

    public static async Task Main(string[] args)
    {
        EventHubsConnectionStringBuilder connectionStringBuilder = new EventHubsConnectionStringBuilder(EhConnectionString)
        {
            EntityPath = EhEntityPath
        };

        eventHubClient = EventHubClient.CreateFromConnectionString(connectionStringBuilder.ToString());

        while (true)
        {
            Console.Write("Please enter message to send: ");
            string message = Console.ReadLine();
            if (string.IsNullOrWhiteSpace(message)) break;

            await eventHubClient.SendAsync(new EventData(Encoding.UTF8.GetBytes(message)));
        }

        await eventHubClient.CloseAsync();

        Console.WriteLine("Press ENTER to exit.");
        Console.ReadLine();
    }
}
```

Consumer

Next we'll create a consumer; so the first thing we'll need is to grant permissions for listening. We'll create a second new console application with this same library, and the processor library, too:

```csharp
class Program
{
    private const string EhConnectionString = "Endpoint=sb://pcm-testeventhub.servicebus.windows.net/;SharedAccessKeyName=Listener;SharedAccessKey=key;EntityPath=pcm-eventhub1";
    private const string EhEntityPath = "pcm-eventhub1";
    private const string StorageContainerName = "eventhub";
    private const string StorageAccountName = "pcmeventhubstorage";
    private const string StorageAccountKey = "key";

    private static readonly string StorageConnectionString = string.Format("DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}", StorageAccountName, StorageAccountKey);

    static async Task Main(string[] args)
    {
        Console.WriteLine("Registering EventProcessor...");

        var eventProcessorHost = new EventProcessorHost(
            EhEntityPath,
            PartitionReceiver.DefaultConsumerGroupName,
            EhConnectionString,
            StorageConnectionString,
            StorageContainerName);

        // Registers the Event Processor Host and starts receiving messages
        await eventProcessorHost.RegisterEventProcessorAsync<EventsProcessor>();

        Console.WriteLine("Receiving. Press ENTER to stop worker.");
        Console.ReadLine();

        // Disposes of the Event Processor Host
        await eventProcessorHost.UnregisterEventProcessorAsync();
    }
}

class EventsProcessor : IEventProcessor
{
    public Task CloseAsync(PartitionContext context, CloseReason reason)
    {
        Console.WriteLine($"Processor shutting down. Partition '{context.PartitionId}', Reason: '{reason}'.");
        return Task.CompletedTask;
    }

    public Task OpenAsync(PartitionContext context)
    {
        Console.WriteLine($"SimpleEventProcessor initialized. Partition: '{context.PartitionId}'");
        return Task.CompletedTask;
    }

    public Task ProcessErrorAsync(PartitionContext context, Exception error)
    {
        Console.WriteLine($"Error on partition: {context.PartitionId}, Error: {error.Message}");
        return Task.CompletedTask;
    }

    public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
            Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
        }

        return context.CheckpointAsync();
    }
}
```

As you can see, we can now transmit data through the Event Hub into client applications.

Footnotes

*Large in terms of frequency, rather than volume; for example, transmitting a small message twice a second, rather than uploading a petabyte of data.

References

https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send
https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-receive-eph

This entry was posted in Azure, Azure Event Hub, C# and tagged azure, azure events, data, data transmission, data volume, event hub on May 21, 2018 by pcmichaels.

What Can You Do With a Logic App? Part Three – Creating a Logic App Client

One of the things that is missing from Azure Logic Apps is the ability to integrate human interaction. Microsoft do have their own version of an interactive workflow (PowerApps), which is (obviously) far better than what you can produce by following this post. In this post, we'll create a very basic client for a Logic App. Obviously, with some thought, this could easily be extended to allow a fully functional, interactive workflow system.

Basic Logic App

Let's start by designing our Logic App. The app in question is going to be a very simple one.
Its format is going to be that it will add a message to a logging queue (just so it has something to do); then we'll ask the user a question, and we'll do this by putting a message onto a topic: left or right. Based on the user's response, we'll write a message to the queue saying either left or right. Let's have a look at our Logic App design. It's worth pointing out a few things about this design:

1. The condition uses the expression base64ToString() to convert the encoded message into plain text.
2. Where the workflow picks up, it uses a peek-lock, and then completes the message at the end. It looks like it's a 'feature' of Logic Apps that an automatic complete on this trigger will not actually complete the message (plus, this is actually a better design).

Queues and Topics

The "log to message queue" action above is putting an entry into a queue; so a quick note about why we're using a queue for logging, and a topic for the interaction with the user. In a real-life version of this system, we might have many users, but they might all want to perform the same action. Let's say that they are all part of a sales process, and the actions are actually actions along that process; adding these to a queue maintains their sequence. Here's the queue and topic layout that I'm using for this post.

Multiple Triggers

As you can see, we actually have two triggers in this workflow. The first starts the workflow (so we'll drop a message into the topic to start it), and the second waits for a second message to go into the topic. To add a trigger part-way through the workflow, simply add an action, then search for and select "Triggers". Because we have a trigger part-way through the workflow, what we have effectively issued here is an await statement. Once a message appears in the subscription, the workflow will continue where it left off: as soon as a message is posted, the workflow carries on.

Client Application

For the client application, we could simply use the Service Bus Explorer (in fact, the screenshots above were taken from using this to simulate messages in the topic). However, the point of this post is to create a client, and so we will… although we'll just create a basic console app for now. We need the client to do two things: read from a topic subscription, and write to a topic. I haven't exactly been here before, but I will be heavily plagiarising from here, here, and here.

Let's create a console application; once that's done, we'll need the Service Bus client library: install it from here. The code is generally quite straightforward, and looks a lot like the code to read from and write to queues. The big difference is that you don't read from a topic, but from a subscription to a topic (a topic can have many subscriptions):

```csharp
class Program
{
    static async Task Main(string[] args)
    {
        MessageHandler messageHandler = new MessageHandler();
        messageHandler.RegisterToRead("secondstage", "sub1");

        await WaitForever();
    }

    private static async Task WaitForever()
    {
        while (true) await Task.Delay(5000);
    }
}

public class MessageHandler
{
    private string _connectionString = "service bus connection string details";
    private ISubscriptionClient _subscriptionClient;

    public void RegisterToRead(string topicName, string subscriptionName)
    {
        _subscriptionClient = new SubscriptionClient(_connectionString, topicName, subscriptionName);

        MessageHandlerOptions messageHandlerOptions = new MessageHandlerOptions(ExceptionReceived)
        {
            AutoComplete = false,
            MaxAutoRenewDuration = new TimeSpan(1, 0, 0)
        };

        _subscriptionClient.RegisterMessageHandler(ProcessMessage, messageHandlerOptions);
    }

    private async Task ProcessMessage(Message message, CancellationToken cancellationToken)
    {
        string messageText = Encoding.UTF8.GetString(message.Body);
        Console.WriteLine(messageText);
        string leftOrRight = Console.ReadLine();

        await _subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);

        await SendResponse(leftOrRight, "userinput");
    }

    private async Task SendResponse(string leftOrRight, string topicName)
    {
        TopicClient topicClient = new TopicClient(_connectionString, topicName);
        Message message = new Message(Encoding.UTF8.GetBytes(leftOrRight));
        await topicClient.SendAsync(message);
    }

    private Task ExceptionReceived(ExceptionReceivedEventArgs arg)
    {
        Console.WriteLine(arg.Exception.ToString());
        return Task.CompletedTask;
    }
}
```

If we run it, then when the Logic App reaches the second trigger, we'll get a message from the subscription and ask directions. Based on the response, the Logic App will execute either the right or left branch of code.

Summary

Having worked with workflow systems in the past, one recurring feature of them is that they start to get used for things that don't fit into a workflow, resulting in a needlessly over-complex system. I imagine that Logic Apps are no exception to this rule, and in 10 years' time, people will roll their eyes at how Logic Apps have been used where a simple web service would have done the whole job. The saving grace here is source control: the workflow inside a Logic App is simply a JSON file, and so it can be source controlled, added to a CI pipeline, and all the good things that you might expect. Whether or not a more refined version of what I have described here makes any sense is another question. There are many downsides to this approach: firstly, you are fighting against the Service Bus by asking it to wait for input (that part is a very fixable problem with a bit of an adjustment to the messages); secondly, you would presumably need some form of timeout (again, a fixable problem that will probably feature in a future post).
The biggest issue here is that you are likely introducing complex conditional logic with no way to unit test; this isn't, per se, fixable; however, you can introduce some canary logic (again, this will probably be the feature of a future post).

References

https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-limits-and-config
https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions
https://stackoverflow.com/questions/28127001/the-lock-supplied-is-invalid-either-the-lock-expired-or-the-message-has-alread

This entry was posted in Azure, Azure Logic Apps, Azure Service Bus, C#, Workflow and tagged azure, azure logic apps, azure service bus, client, logic app client, logic apps, queue, subscription, topic on May 6, 2018 by pcmichaels.

What Can You Do With a Logic App? Part Two – Use Excel to Manage an E-mail Notification System

In this post I started a series covering different scenarios in which you might use an Azure Logic App, and how you might go about that. In this, the second post, we're going to set up an Excel spreadsheet that lets you simply add a row to an Excel table and have a Logic App act on that row. So, we'll set up a basic spreadsheet with an e-mail address, subject, text and the date we want it to send; then we'll have the Logic App send the next eligible mail in the list, and mark it as sent.

Spreadsheet

I'll first state that I do not have an Office 365 subscription, and nothing that I do here will require one. We'll create the spreadsheet in Office Online. Head over to OneDrive (if you don't have a OneDrive account, they are free) and create a new spreadsheet. In the spreadsheet, create a new table: just enter some headers (like below), then highlight the columns and "Insert Table". Remember to check "My table has headers". Now enter some data.

Create the Logic App

In this post I showed how you can use Visual Studio to create and deploy a Logic App; we'll do that here. Once we've created the Logic App, we'll need to create an action that will get the Excel file that we created; in this case, "List rows present in a table". This also requires that we specify the table (if you're using the free online version of Excel, then you'll have to live with the table name you're given).

Loop

This retrieves a list of rows, and so the next step is to iterate through them one by one. We'll use a for-each.

Conditions

Okay, so we're now looking at every row in the table; but we don't want every row in the table, we only want the ones that have not already been sent, and the ones that are due to be sent (so the date is either today, or earlier). We can use a conditional statement for this, but we have two problems:

1. Azure Logic Apps are very bad at handling dates; that is to say, they don't.
2. There is currently no way in an Azure Logic App to update an Excel spreadsheet row (you can add and delete only).

The former is easily solved; the way I elected to solve the latter is to simply delete the row instead of updating it. It is possible to delete the current row and add it back with new values; however, we won't bother with that here.

Back to the date problem: what we need here is an Azure Function…

Creating an Azure Function

Here is the code for our function (see here for details of how to create one):

```csharp
[FunctionName("DatesCompare")]
public static IActionResult Run([HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequest req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    string requestBody = new StreamReader(req.Body).ReadToEnd();
    return ParseDates(requestBody);
}

public static IActionResult ParseDates(string requestBody)
{
    dynamic data = JsonConvert.DeserializeObject(requestBody);

    DateTime date1 = (DateTime)data.date1;
    // The date from Excel arrives as an OLE Automation date (a double)
    DateTime date2 = DateTime.FromOADate((double)data.date2);

    int returnFlagIndicator = 0;
    if (date1 > date2)
    {
        returnFlagIndicator = 1;
    }
    else if (date1 < date2)
    {
        returnFlagIndicator = -1;
    }

    return (ActionResult)new OkObjectResult(new
    {
        returnFlag = returnFlagIndicator
    });
}
```

There are a few points to note about this code:

1. The date coming from Excel extracts as a double, which is why we need to use FromOADate.
2. The reason to split the function up is so that the main logic can be more easily unit tested. If you ever need a reason for unit testing, then try to work out why an Azure Function isn't working inside a Logic App!

The logic around this function looks like this: we build up the request body with the information that we have, and then parse the output. Finally, we can check whether the date is in the past and then send the e-mail. Lastly, as we said earlier, we'll delete the row to ensure that the e-mail is only sent once. The eagle-eyed and sane amongst you will notice that I've used the subject as a key. Don't do this: it's very bad practice!
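Point 2 above argues that splitting ParseDates out makes the main logic unit-testable. A minimal xUnit sketch of such a test follows; the containing class name (DatesCompareFunctions) and the OADate values are my assumptions, based only on the code above:

```csharp
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json.Linq;
using Xunit;

public class DatesCompareTests
{
    [Theory]
    // date2 is an OLE Automation date, as Excel supplies it (43200 is roughly April 2018)
    [InlineData("2018-06-11", 43200.0, 1)]   // date1 after date2 -> flag 1
    [InlineData("2018-01-09", 43200.0, -1)]  // date1 before date2 -> flag -1
    public void ParseDates_ReturnsExpectedFlag(string date1, double date2, int expected)
    {
        // Build the request body in the same JSON shape the function deserialises
        string body = new JObject { ["date1"] = date1, ["date2"] = date2 }.ToString();

        var result = Assert.IsType<OkObjectResult>(DatesCompareFunctions.ParseDates(body));

        // The function returns an anonymous object; read returnFlag via reflection,
        // since anonymous-type members are internal to the function's assembly
        int actual = (int)result.Value.GetType().GetProperty("returnFlag").GetValue(result.Value);
        Assert.Equal(expected, actual);
    }
}
```

Because ParseDates takes a plain string and returns an IActionResult, the test needs no HTTP plumbing at all, which is exactly the benefit the post is describing.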
References

https://github.com/azure/azure-functions-host/wiki/azure-functions-runtime-2.0-known-issues

This entry was posted in Azure, Azure Functions, Azure Logic Apps and tagged azure, azure function, dates, e-mail scheduler, excel, excel online, functions, logic apps on April 22, 2018 by pcmichaels.

Short Walks – XUnit Warning

As with many of these posts, this is more of a "note to self". Say you have an assertion that looks something like this in your xUnit test:

```csharp
Assert.True(myEnumerable.Any(a => a.MyValue == "1234"));
```

In later versions (I'm not sure exactly which version this was introduced in), you'll get the following warning:

warning xUnit2012: Do not use Enumerable.Any() to check if a value exists in a collection.

So, xUnit has a nice little feature where you can use the following syntax instead:

```csharp
Assert.Contains(myEnumerable, a => a.MyValue == "1234");
```

This entry was posted in C#, Short Walks, Unit Testing and tagged do not use Enumerable.Any() to check if a value exists in a collection, unit test, xUnit on April 14, 2018 by pcmichaels.

Creating an Azure Logic App Directly from Visual Studio

In previous posts, I've talked about setting up notification systems using Logic Apps, and shown how you might send tweets automatically. One of the things that sort of stands out with Logic Apps when you first come to them is how online they are: that is, you get a nice visual editor, and they run in the cloud. But what about when you want them to be part of your source-controlled solution? Well, the good news (and the reason for this article) is that Logic Apps are, behind the scenes, just JSON files with a nice designer. You can certainly source control the underlying files; but you can also edit them, with the same designer you get on the web, directly in VS.

Logic App Tools for Visual Studio

Microsoft have kindly released a VS add-on that will allow you to edit Logic Apps inside Visual Studio. You will also need the Azure workload installed in Visual Studio. Creating Azure resources inside VS is done by creating a new Azure Resource Group; you then get asked what type. This creates a template with a deploy (PowerShell) script and two JSON files. Now that you have the VS add-on described above, you can edit these JSON files in a visual designer. The Logic Apps designer that appears gives you exactly the same experience that you have in the cloud.

Deploy

Deploying this solution to Azure is very simple using the context menu; you can monitor the output window to see the status. Finally, a quick look in the Azure Portal validates that it has, indeed, deployed; and a quick look on Twitter confirms that it works.

What About Automatic Deployment?

The process above works great, but isn't very good inside any kind of continuous integration pipeline. Fortunately, we have a ready-made deploy script (remember the helpful PowerShell script above?). There are a couple of tweaks needed to run this out of the box:

- If you didn't call the resource group the correct name when you created the project, you can correct this inside the script.
- You'll need to log into Azure first:

```powershell
Login-AzureRmAccount
```

This displays a log-in prompt and allows you to log in (hold that question for a minute).

- Finally:

```powershell
.\Deploy-AzureResourceGroup.ps1 -ResourceGroupLocation westus
```

Obviously, the appropriate location might not be westus (it isn't for me).

Back to your question, which is that, if this log-in displays a prompt, then it's not very automated. There are a couple of ways to solve this, but this is the easiest*:

```powershell
Login-AzureRmAccount
```

and enter your credentials manually. Then:

```powershell
Save-AzureRmProfile -Force -Path "c:\myprofile.json"
```

That exports the details of your profile. Finally, when you need to automatically log in:

```powershell
Select-AzureRmProfile -Path "c:\myprofile.json"
```

Now you're logged on just like before.
Edit Logic Apps

Now that our app is in Azure, we can edit it: this opens the same window; and, if you didn't have the code, it can be downloaded here. It's worth noting that the version you can edit in Cloud Explorer is not the same version that you are editing inside the project.

Debugging

Finally, using the run history, you can view the successful (and failed) runs. Curiously, this Logic App is fundamentally flawed (I didn't realise this restriction in Twitter); easily fixed, though. Please don't try that at home: I don't want a cease and desist order from Twitter!

Footnotes

*Easiest is not necessarily the best.

References

https://docs.microsoft.com/en-us/azure/logic-apps/quickstart-create-logic-apps-with-visual-studio
https://docs.microsoft.com/en-us/azure/logic-apps/manage-logic-apps-with-visual-studio
https://docs.microsoft.com/en-us/powershell/module/azurerm.resources/new-azurermresourcegroup?view=azurermps-5.6.0
https://stackoverflow.com/questions/37249623/how-to-login-without-prompt

This entry was posted in Azure, Azure Logic Apps, Source Control, Visual Studio and tagged azure, azure logic apps, ci, continuous integration, json, logic apps, logic apps designer, Login-AzureRmAccount, Save-AzureRmProfile, Select-AzureRmProfile, source control on April 8, 2018 by pcmichaels.

Short Walks – Submit a Single Row of Data in ReactJS

While looking into the React sample app, I came across a scenario whereby you might need to pass a specific piece of data across to an event handler. A lot of the online examples cover data state; but what happens when you have a situation such as the one in the sample app? Consider this: in this instance, you want to pass the temperature of the line you've selected. The solution is quite simple, and documented here:

```tsx
private renderForecastsTable(forecasts: WeatherForecast[]) {
    return <table className='table'>
        <thead>
            <tr>
                <th>Date</th>
                <th>Temp. (C)</th>
                <th>Temp. (F)</th>
                <th>Summary</th>
            </tr>
        </thead>
        <tbody>
            {forecasts.map(forecast =>
                <tr key={forecast.dateFormatted}>
                    <td>{forecast.dateFormatted}</td>
                    <td>{forecast.temperatureC}</td>
                    <td>{forecast.temperatureF}</td>
                    <td>{forecast.summary}</td>
                    <td><button onClick={(e) => this.handleClick(e, forecast)}>Log Temperature!</button></td>
                </tr>
            )}
        </tbody>
    </table>;
}
```

Here, we're passing the entire forecast object to the handler, which looks like this:

```tsx
handleClick = (event: React.FormEvent<HTMLButtonElement>, forecast: WeatherForecast) => {
    console.log("timestamp: " + event.timeStamp);
    console.log("data: " + forecast.temperatureC);
}
```

https://reactjs.org/docs/forms.html
https://reactjs.org/docs/handling-events.html

This entry was posted in ReactJS, Short Walks and tagged javascript, reactjs, typescript on March 31, 2018 by pcmichaels.

Adding a New Screen to the React Template Project

In this post I started looking into ReactJS. After getting the sample project running, I decided that I'd try adding a new screen. Since it didn't go as smoothly as I expected, I've documented my adventures. The target of this post is to create a new screen, using the sample project inside Visual Studio.

Step 1

Create a brand new project for React. If you run this out of the box (if you can't because of missing packages, then see this article), you'll get the template's default screen.

Step 2

Add a new tsx file to the components folder. Here's some code to add into this new file:

```tsx
import * as React from 'react';
import { RouteComponentProps } from 'react-router';

export class NewScreen extends React.Component<RouteComponentProps<{}>, {}> {
    public render() {
        return <div>
            <h1>New Screen Test</h1>
        </div>;
    }
}
```

The JavaScript-as-HTML above is one of the things that makes ReactJS an appealing framework. Combine that with TypeScript, and you get a very XAML feel to the whole web application.
step 3 add a link to the navigation screen (navmenu.tsx): <div classname='navbar-collapse collapse'> <ul classname='nav navbar-nav'> <li> <navlink to={ '/' } exact activeclassname='active'> <span classname='glyphicon glyphicon-home'></span> home </navlink> </li> <li> <navlink to={ '/counter' } activeclassname='active'> <span classname='glyphicon glyphicon-education'></span> counter </navlink> </li> <li> <navlink to={ '/fetchdata' } activeclassname='active'> <span classname='glyphicon glyphicon-th-list'></span> fetch data </navlink> </li> <li> <navlink to={'/newscreen'} activeclassname='active'> <span classname='glyphicon glyphicon-th-list'></span> new screen </navlink> </li> </ul> </div> if you run this now, you’ll see the navigation entry, but clicking on it will give you a blank screen. it is just that scenario that motivated this post! step 4 finally, the routes.tsx file needs updating so that it knows which screen to load when: import * as react from 'react'; import { route } from 'react-router-dom'; import { layout } from './components/layout'; import { home } from './components/home'; import { fetchdata } from './components/fetchdata'; import { counter } from './components/counter'; import { newscreen } from './components/newscreen'; export const routes = <layout> <route exact path='/' component={ home } /> <route path='/counter' component={ counter } /> <route path='/fetchdata' component={fetchdata} /> <route path='/newscreen' component={newscreen} /> </layout>; this entry was posted in html5 , javascript , reactjs , typescript and tagged html , javascript , reactjs , sample project , typescript on march 25, 2018 by pcmichaels . using nsubstitute for partial mocks leave a reply i have previously written about how to, effectively, subclass using nsubstitute; in this post, i’ll cover how to partially mock out that class. before i get into the solution; what follows is a workaround to allow badly written, or legacy code to be tested without refactoring. 
if you’re reading this and thinking you need this solution then my suggestion would be to refactor and use some form of dependency injection. however, for various reasons, that’s not always possible (hence this post). here’s our class to test: public class myfunkyclass { public virtual void methodone() { throw new exception("i do some direct db access"); } public virtual int methodtwo() { throw new exception("i do some direct db access and return a number"); return new random().next(5); } public virtual int methodthree() { methodone(); if (methodtwo() <= 3) { return 1; } return 2; } } the problem okay, so let’s write our first test: [fact] public void test1() { // arrange myfunkyclass myfunkyclass = new myfunkyclass(); // act int result = myfunkyclass.methodthree(); // assert assert.equal(2, result); } so, what’s wrong with that? well, we have some (simulated) db access, so the code will error. not the but a solution the first thing to do here is to mock out methodone(), as it has (pseudo) db access: [fact] public void test1() { // arrange myfunkyclass myfunkyclass = substitute.forpartsof<myfunkyclass>(); myfunkyclass.when(a => a.methodone()).donotcallbase(); // act int result = myfunkyclass.methodthree(); // assert assert.equal(2, result); } running this test now will fail with: message: system.exception : i do some direct db access and return a number we’re past the first hurdle. 
We can presumably do the same thing for MethodTwo:

[Fact]
public void Test1()
{
    // Arrange
    MyFunkyClass myFunkyClass = Substitute.ForPartsOf<MyFunkyClass>();
    myFunkyClass.When(a => a.MethodOne()).DoNotCallBase();
    myFunkyClass.When(a => a.MethodTwo()).DoNotCallBase();

    // Act
    int result = myFunkyClass.MethodThree();

    // Assert
    Assert.Equal(2, result);
}

Now when we run the code, the test still fails, but it no longer accesses the DB:

Message: Assert.Equal() Failure
Expected: 2
Actual: 1

The problem here is that, even though we don't want MethodTwo to execute, we do want it to return a predefined result. Once we've told it not to call the base method, we can then tell it to return whatever we choose (these are separate events – see the bottom of this post for a more detailed explanation of why); for example:

[Fact]
public void Test1()
{
    // Arrange
    MyFunkyClass myFunkyClass = Substitute.ForPartsOf<MyFunkyClass>();
    myFunkyClass.When(a => a.MethodOne()).DoNotCallBase();
    myFunkyClass.When(a => a.MethodTwo()).DoNotCallBase();
    myFunkyClass.MethodTwo().Returns(5);

    // Act
    int result = myFunkyClass.MethodThree();

    // Assert
    Assert.Equal(2, result);
}

And now the test passes.

TLDR – what is this actually doing?

To understand this better, we could do this entire process manually. Only when you've felt the pain of a manual mock can you really see what mocking frameworks such as NSubstitute are doing for us.

Let's assume that we don't have a mocking framework at all, but that we still want to test MethodThree() above. One approach that we could take is to subclass MyFunkyClass, and then test that subclass. Here's what that might look like:

class MyFunkyClassTest : MyFunkyClass
{
    public override void MethodOne()
    {
        //base.MethodOne();
    }

    public override int MethodTwo()
    {
        //return base.MethodTwo();
        return 5;
    }
}

As you can see, now that we've subclassed MyFunkyClass, we can override the behaviour of the relevant virtual methods.
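The same subclass-and-override trick is language-agnostic. For readers more at home in the blog's TypeScript posts, here is a sketch of the manual partial mock in TypeScript (the class and method names are hypothetical, chosen to mirror the C# example):

```typescript
// The class under test: two methods that "touch the DB", one that doesn't.
class FunkyClass {
    methodOne(): void { throw new Error('pretend direct DB access'); }
    methodTwo(): number { throw new Error('pretend direct DB access'); }
    methodThree(): number {
        this.methodOne();
        return this.methodTwo() <= 3 ? 1 : 2;
    }
}

// Manual partial mock: override only the DB-touching methods.
class FunkyClassTest extends FunkyClass {
    methodOne(): void { /* equivalent of DoNotCallBase: swallow the call */ }
    methodTwo(): number { return 5; } // equivalent of Returns(5)
}

console.log(new FunkyClassTest().methodThree()); // 2
```

The overridden subclass lets `methodThree` run its real logic while the expensive calls are stubbed out, which is exactly what `ForPartsOf` automates.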
In the case of MethodOne, we've effectively issued a DoNotCallBase() (by not calling base!). For MethodTwo, we've issued a DoNotCallBase, and then a Returns statement.

Let's add a new test to use this new, manual mock:

[Fact]
public void Test2()
{
    // Arrange
    MyFunkyClassTest myFunkyClassTest = new MyFunkyClassTest();

    // Act
    int result = myFunkyClassTest.MethodThree();

    // Assert
    Assert.Equal(2, result);
}

That's much cleaner – why not always use manual mocks?

It is much cleaner, if you always want MethodTwo to return 5. Once you need it to return 2, you have two choices: either you create a new mock class, or you start putting logic into your mock. The latter, if done wrongly, can end up as code that is unreadable and difficult to maintain; and if done correctly will end up as a mini version of NSubstitute. Finally, however well you write the mocks, as soon as you have more than one for a single class, every change to the class (for example, changing a method's parameters or return type) results in a change to more than one test class.

It's also worth mentioning again that this problem is one that has already been solved, cleanly, by dependency injection.

This entry was posted in C#, Unit Testing and tagged c#, dependency injection, DoNotCallBase, ForPartsOf, manual mock, mock, mocking framework, NSubstitute, partial mock, Returns, xunit on March 22, 2018 by pcmichaels.

Forcing an npm restore

Leave a reply

I've recently started looking into the JavaScript library ReactJS. Having read a couple of tutorials and watched the start of a Pluralsight video, I did the usual and started creating a sample application. The ReactJS template in VS is definitely a good place to start; however, the first issue that I came across was with npm. Upon creating a new web application, I was faced with the following errors. The reason being that, unlike NuGet, npm doesn't seem to sort your dependencies out automatically.
After playing around with it for a while, this is my advice to my future self on how to deal with such issues. The best way to force npm to restore your packages seems to be to call npm install, either from PowerShell or from the Package Manager Console inside VS.

PowerShell

On running this, I found that, despite getting the error shown above, the packages were still restored; however, you can trash that file. Following that, delete the node_modules directory and re-run, and there are no errors.

Package Manager Console

In Package Manager Console, ensure that you're in the right directory (you'll be in the solution directory by default, which is the wrong directory).

References

https://stackoverflow.com/questions/12866494/how-do-you-reinstall-an-apps-dependencies-using-npm

This entry was posted in Javascript, ReactJS and tagged javascript, npm, reactjs, typescript on March 18, 2018 by pcmichaels.
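The restore workflow above (check whether node_modules exists, and if not, run `npm install`) can be sketched as a small Node/TypeScript helper. This is a hypothetical convenience script, not something from the post itself:

```typescript
// Suggest an npm restore when node_modules is missing -
// the symptom described above when the VS template first loads.
import { existsSync } from 'fs';
import { join } from 'path';

function restoreHint(projectDir: string): string {
    return existsSync(join(projectDir, 'node_modules'))
        ? 'packages already restored'
        : 'run: npm install';
}

console.log(restoreHint(process.cwd()));
```

In practice the same check is just `Test-Path node_modules` in PowerShell before running `npm install` by hand.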

URL analysis for pmichaels.net


https://www.pmichaels.net/category/swift/
https://www.pmichaels.net/category/winjs/
https://www.pmichaels.net/wp-content/uploads/2018/04/logic-app-2-8.png
https://www.pmichaels.net/2015/01/
https://www.pmichaels.net/category/message-queue/
https://www.pmichaels.net/wp-content/uploads/2018/04/logic-app-2-1.png
https://www.pmichaels.net/category/async/
https://www.pmichaels.net/category/wcf/
https://www.pmichaels.net/wp-content/uploads/2018/05/azure-event-hub-6.png
https://www.pmichaels.net/category/cloud-architecture/
https://www.pmichaels.net/category/net-framework/
https://www.pmichaels.net/2018/03/11/what-can-you-do-with-a-logic-app-part-one-send-tweets-at-random-intervals-based-on-a-defined-data-set/
https://www.pmichaels.net/2018/01/
https://www.pmichaels.net/wp-content/uploads/2018/04/azure-logic-apps-16.png
https://www.pmichaels.net/category/nunit/

Whois Information


Whois is a protocol that provides access to domain registration information. It shows when a website was registered, when it will expire, and the contact details for the site. In a nutshell, the record includes the following information:

Domain Name: PMICHAELS.NET
Registry Domain ID: 1855215378_DOMAIN_NET-VRSN
Registrar WHOIS Server: whois.paragonnames.net
Registrar URL: http://paragonnames.com
Updated Date: 2017-04-15T01:44:26Z
Creation Date: 2014-04-18T08:59:57Z
Registry Expiry Date: 2018-04-18T08:59:57Z
Registrar: Paragon Internet Group Ltd t/a Paragon Names
Registrar IANA ID: 1860
Registrar Abuse Contact Email: [email protected]
Registrar Abuse Contact Phone: +44.2031375790
Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
Domain Status: clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited
Name Server: NS1.TSOHOST.CO.UK
Name Server: NS2.TSOHOST.CO.UK
Name Server: NS3.TSOHOST.CO.UK
DNSSEC: unsigned
URL of the ICANN Whois Inaccuracy Complaint Form: https://www.icann.org/wicf/
>>> Last update of whois database: 2018-03-15T23:29:07Z <<<

For more information on Whois status codes, please visit https://icann.org/epp

NOTICE: The expiration date displayed in this record is the date the
registrar's sponsorship of the domain name registration in the registry is
currently set to expire. This date does not necessarily reflect the expiration
date of the domain name registrant's agreement with the sponsoring
registrar. Users may consult the sponsoring registrar's Whois database to
view the registrar's reported date of expiration for this registration.

TERMS OF USE: You are not authorized to access or query our Whois
database through the use of electronic processes that are high-volume and
automated except as reasonably necessary to register domain names or
modify existing registrations; the Data in VeriSign Global Registry
Services' ("VeriSign") Whois database is provided by VeriSign for
information purposes only, and to assist persons in obtaining information
about or related to a domain name registration record. VeriSign does not
guarantee its accuracy. By submitting a Whois query, you agree to abide
by the following terms of use: You agree that you may use this Data only
for lawful purposes and that under no circumstances will you use this Data
to: (1) allow, enable, or otherwise support the transmission of mass
unsolicited, commercial advertising or solicitations via e-mail, telephone,
or facsimile; or (2) enable high volume, automated, electronic processes
that apply to VeriSign (or its computer systems). The compilation,
repackaging, dissemination or other use of this Data is expressly
prohibited without the prior written consent of VeriSign. You agree not to
use electronic processes that are automated and high-volume to access or
query the Whois database except as reasonably necessary to register
domain names or modify existing registrations. VeriSign reserves the right
to restrict your access to the Whois database in its sole discretion to ensure
operational stability. VeriSign may restrict or terminate your access to the
Whois database for failure to abide by these terms of use. VeriSign
reserves the right to modify these terms at any time.

The Registry database contains ONLY .COM, .NET, .EDU domains and
Registrars.

  REGISTRAR Paragon Internet Group Ltd t/a Paragon Names

SERVERS

  SERVER net.whois-servers.net

  ARGS domain =pmichaels.net

  PORT 43

  TYPE domain

DOMAIN

  NAME pmichaels.net

  CHANGED 2017-04-15

  CREATED 2014-04-18

STATUS
clientTransferProhibited https://icann.org/epp#clientTransferProhibited
clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited

NSERVER

  NS1.TSOHOST.CO.UK 195.62.28.14

  NS2.TSOHOST.CO.UK 95.142.155.4

  NS3.TSOHOST.CO.UK 95.142.154.15

  REGISTERED yes

Go to top

Mistakes


The following list shows possible misspellings that internet users might type when searching for this website.

  • www.upmichaels.com
  • www.7pmichaels.com
  • www.hpmichaels.com
  • www.kpmichaels.com
  • www.jpmichaels.com
  • www.ipmichaels.com
  • www.8pmichaels.com
  • www.ypmichaels.com
  • www.pmichaelsebc.com
  • www.pmichaelsebc.com
  • www.pmichaels3bc.com
  • www.pmichaelswbc.com
  • www.pmichaelssbc.com
  • www.pmichaels#bc.com
  • www.pmichaelsdbc.com
  • www.pmichaelsfbc.com
  • www.pmichaels&bc.com
  • www.pmichaelsrbc.com
  • www.pmichaels4bc.com
  • www.pmichaelsc.com
  • www.pmichaelsbc.com
  • www.pmichaelsvc.com
  • www.pmichaelsvbc.com
  • www.pmichaelsvc.com
  • www.pmichaels c.com
  • www.pmichaels bc.com
  • www.pmichaels c.com
  • www.pmichaelsgc.com
  • www.pmichaelsgbc.com
  • www.pmichaelsgc.com
  • www.pmichaelsjc.com
  • www.pmichaelsjbc.com
  • www.pmichaelsjc.com
  • www.pmichaelsnc.com
  • www.pmichaelsnbc.com
  • www.pmichaelsnc.com
  • www.pmichaelshc.com
  • www.pmichaelshbc.com
  • www.pmichaelshc.com
  • www.pmichaels.com
  • www.pmichaelsc.com
  • www.pmichaelsx.com
  • www.pmichaelsxc.com
  • www.pmichaelsx.com
  • www.pmichaelsf.com
  • www.pmichaelsfc.com
  • www.pmichaelsf.com
  • www.pmichaelsv.com
  • www.pmichaelsvc.com
  • www.pmichaelsv.com
  • www.pmichaelsd.com
  • www.pmichaelsdc.com
  • www.pmichaelsd.com
  • www.pmichaelscb.com
  • www.pmichaelscom
  • www.pmichaels..com
  • www.pmichaels/com
  • www.pmichaels/.com
  • www.pmichaels./com
  • www.pmichaelsncom
  • www.pmichaelsn.com
  • www.pmichaels.ncom
  • www.pmichaels;com
  • www.pmichaels;.com
  • www.pmichaels.;com
  • www.pmichaelslcom
  • www.pmichaelsl.com
  • www.pmichaels.lcom
  • www.pmichaels com
  • www.pmichaels .com
  • www.pmichaels. com
  • www.pmichaels,com
  • www.pmichaels,.com
  • www.pmichaels.,com
  • www.pmichaelsmcom
  • www.pmichaelsm.com
  • www.pmichaels.mcom
  • www.pmichaels.ccom
  • www.pmichaels.om
  • www.pmichaels.ccom
  • www.pmichaels.xom
  • www.pmichaels.xcom
  • www.pmichaels.cxom
  • www.pmichaels.fom
  • www.pmichaels.fcom
  • www.pmichaels.cfom
  • www.pmichaels.vom
  • www.pmichaels.vcom
  • www.pmichaels.cvom
  • www.pmichaels.dom
  • www.pmichaels.dcom
  • www.pmichaels.cdom
  • www.pmichaelsc.om
  • www.pmichaels.cm
  • www.pmichaels.coom
  • www.pmichaels.cpm
  • www.pmichaels.cpom
  • www.pmichaels.copm
  • www.pmichaels.cim
  • www.pmichaels.ciom
  • www.pmichaels.coim
  • www.pmichaels.ckm
  • www.pmichaels.ckom
  • www.pmichaels.cokm
  • www.pmichaels.clm
  • www.pmichaels.clom
  • www.pmichaels.colm
  • www.pmichaels.c0m
  • www.pmichaels.c0om
  • www.pmichaels.co0m
  • www.pmichaels.c:m
  • www.pmichaels.c:om
  • www.pmichaels.co:m
  • www.pmichaels.c9m
  • www.pmichaels.c9om
  • www.pmichaels.co9m
  • www.pmichaels.ocm
  • www.pmichaels.co
  • pmichaels.netm
  • www.pmichaels.con
  • www.pmichaels.conm
  • pmichaels.netn
  • www.pmichaels.col
  • www.pmichaels.colm
  • pmichaels.netl
  • www.pmichaels.co
  • www.pmichaels.co m
  • pmichaels.net
  • www.pmichaels.cok
  • www.pmichaels.cokm
  • pmichaels.netk
  • www.pmichaels.co,
  • www.pmichaels.co,m
  • pmichaels.net,
  • www.pmichaels.coj
  • www.pmichaels.cojm
  • pmichaels.netj
  • www.pmichaels.cmo