
Growing a Beard (or "How I Wrote My First Useful Node Project")
By Marko Bonaći

Stage setting: Camera is positioned above the treetop of one of three tall poplars. It looks down on the terrace of a pub. It’s evening, but there’s still enough light to see that the terrace is sparsely populated.

Camera slowly moves down towards a specific table in the corner…

As the camera moves down, an old, crummy typewriter font appears on the screen, typing with distinct sound. It spells:

May 2015, somewhere in Germany…

The frame shows four adult males seated at the table. They sip their beers slowly, except for one of them. The camera focuses on him as he downs a large German one-liter pint in just two takes. On the table there's a visible difference between the number of empty beer mugs in front of him and the others. After a short silence, the heavy drinker speaks (quickly, like he's afraid someone will interrupt him, with a facial expression like he's at confession):

“I still use grep to search through logs”.

As the sentence hits the eardrums of his buddies, a loud sound of overwhelming surprise involuntarily leaves their mouths. They notice that it made every guest turn towards their table, and the terrace falls into complete silence. The oldest one amongst them reacts quickly, as if he wants no one else to hear what he just heard: he turns towards the rest of the terrace and makes a hand-waving motion, signaling that everything is fine. The sound of small talk and "excellent" German jokes once again permeates the terrace.

He, in fact, knew very well that it wasn't all fine. A burning desire to right this wrong grew somewhere deep within his chest. The camera focuses on this gentleman and moves closer and closer to his chest. When it hits the chest, {FX start} the camera enters inside, beneath the ribs. We see his heart pumping wildly. The camera goes even deeper and enters the heart's atrium, where we see buckets of blood leaving to quickly replenish the rest of the body in this moment of great need {FX end}.

The camera frame closes to a single point in the center of the screen.

A couple of weeks later, we see a middle-aged Croatian in his kitchen, whistling some unrecognizable song while making Nescafé Creme and a secret Croatian vitamin drink called Cedevita.

Now camera shows him sitting at his desk and focuses on his face, “en face”.

He begins to tell his story…

“It was a warm Thursday, sometime in May 2015. My first week at Sematext was coming to an end. I still remember, I was doing some local, on-ramping work, nothing remotely critical, when my boss asked me to set everything aside. He had a new and exciting project for me. He had allegedly found out that even the biggest proponent of centralized log management, Sematext, hides in its ranks a person who still uses SSH+grep.

The task was to design and implement an application that would let Logsene users access their logs from the command line (L-CLI from now on). I mentioned in my Sematext job interview that, besides Apache Spark (which was to be my main responsibility), I’d like to work with Node.js, if the opportunity presented itself. And here it was…”

What is Logsene?
Good thing you asked. Let me give you a bit of context, in case you don’t know what Logsene is. Logsene is a web application that’s used to find your way through piles of log messages. Our customers send us huge amounts of log messages, which are then collected into one of our Elasticsearch clusters (hereinafter ES). The system (built entirely out of open source components) is basically processing logs in near-real-time, so after the logs are safely stored and indexed in ES, they are immediately visible in Logsene. Here’s what the Logsene UI looks like:

[Figure: the Logsene UI]

See those two large fields in the figure above? One for the search query and the other for the time range? Yes? Well, that was basically what my application needed to provide, only instead of a web UI, users would use a command-line interface.

So what was the problem? Why am I telling you all this, you might be wondering. Well, I was not exactly what you would call a Node expert at the time. I was more of a Node fanboy than anything else. I mean yes, I had been following developments in the Node.js community almost from the start and I desperately wanted to work with it, but, in spite of all my "Node evangelism" at my previous workplace, I was not able to push through a single Node project, and thus I did not have any Node applications in production, which is how you gain the right kind of experience. I only played with it in my free time.

The Start Was Rough
On top of all that, I had purchased a lovely MacBook Pro just a couple of weeks earlier and, having previously worked only on Windows and Ubuntu, I needed to google solutions for even the most trivial stuff, especially improvements I wanted to make to my development workflow.

Which IDE do I choose for developing in Node? Which text editor do I use for lighter stuff?

I had previously tried Brackets and Atom and I had a feeling that they were both somehow better suited for front-end stuff. I had also used WebStorm before, so I downloaded a trial version to check out what juicy improvements had been added since I last used it. None, as far as I could tell. But once I started using it, there was no going back. Though Node support could certainly be substantially better (the only Node project template in WebStorm is "Express app").

Those were all sweet worries, and I enjoyed every step of it, but then it quickly came time to actually do something useful.

How are CLI applications even developed in Node? I found a couple of articles and one of them led me to this package called Ronin, which is like a small framework for developing CLI applications. It allows you to structure your shell commands by simply using folder hierarchy.

[Figure: the Logsene CLI command folder hierarchy, as structured by Ronin]

So, looking at the figure above, we have the following commands (a possible folder layout is sketched after the list):

logsene search

logsene config get

logsene config set
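
With Ronin, that command structure would map to a folder layout roughly like the following (the exact file names are my assumption; only the commands themselves come from the figure):

commands/
    search.js          ->  logsene search
    config/
        get.js         ->  logsene config get
        set.js         ->  logsene config set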

In order for a function to be considered a command, it needs to extend the base object ronin.Command, like this:

[Figure: the search command definition, extending ronin.Command]

The run function is executed when the client invokes the command. In this example, logsene search would trigger the run function you see in the figure above.

Ronin also has a useful notion of middleware, which avoids code duplication by providing functions (placed in the middleware folder) that you "use" in your command definition (the figure above shows that the search command uses two middleware functions, session and auth). All middleware functions that you "use" are executed, in the order you list them, before your command's main run function.

Besides the run function, there's also a help function that is simply used to output the command's usage information (e.g. logsene search --help).

[Figure: the search command's --help output]
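
To make that concrete, here is a minimal sketch of what such a command definition might look like. It assumes Ronin's Command.extend style of defining commands; the middleware names session and auth come from the description above, and everything else (the description text, the run body) is purely illustrative, not the actual Logsene CLI code:

var ronin = require('ronin');

var Search = ronin.Command.extend({
  desc: 'Search Logsene logs from the command line',

  // middleware listed here runs, in order, before run()
  use: ['session', 'auth'],

  // executed when the user types: logsene search <query>
  run: function (query) {
    // ...build the Elasticsearch query and stream results to stdout...
    console.log('searching for: ' + query);
  },

  // printed for: logsene search --help
  help: function () {
    return 'Usage: logsene search [query] [OPTIONS]';
  }
});

module.exports = Search;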

The First Setback
The first real setback I had was a feature that required supporting different people using a shared OS user and a shared Sematext account at the same time, all SSHd into the same box. Many organizations, in order to simplify maintenance, have this kind of "single gateway" setup behind their firewalls.

The problem was that in order to facilitate one of the most important requirements — the ability to pipe search output to other *nix commands (e.g. grep, awk, cut, sed, tail, less, …) — each command needed to be its own, new Node process, i.e. the Node process starts and ends with a single command.

logsene search ERROR | awk '$4 ~/kern/'

[The command searches the last hour (the default) of logs for the term ERROR and pipes the output to awk, which prints out only those log entries that originated from the kernel.]

In other words, it wasn’t possible to use my own REPL, with my own logsene> prompt, where each user’s session would simply be bound to the duration of the node process, i.e. the REPL. In that case, handling a user session would be as simple as writing and reading environment variables. The problem is that environment variables, written from a Node process (or any other process, for that matter) live only as long as the process that wrote them lives. When the process exits, environment variables that the process created are purged from the environment.
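
To illustrate the limitation with a tiny sketch (the variable name is made up):

// one invocation of the CLI
process.env.LOGSENE_SESSION = 'abc123';    // visible to this process and its children only
console.log(process.env.LOGSENE_SESSION);  // prints abc123
// the process exits here; the variable is gone, so the next
// command starts with a clean environment and cannot see it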

That disqualified environment variables as the means of session persistence across multiple commands.

The whole point of the session was to spare users the trouble of logging in with each new command they execute. I needed a way to establish a session environment that would be uniquely identifiable and bound to a specific user session, from multiple, serially executed node processes, i.e., commands.

I searched far and wide through all of them: npm (what a terrible search experience), nipster (better), npmsearch, node-modules (not bad), nodejsmodules (which forces https, but doesn't even have a proper SSL certificate), gitreview, cnpmjs and libraries.io, but I was not able to find anything related to my problem. I decided that, from then on, I was going to use Google (and possibly sometimes nipster and node-modules) to search for packages.

The other thing this "package searching" experience showed is that not a single search solution out there was designed by a search engine expert (no filters, no facets, no date ranges, no combining tags with queries, not even result sorting, ...). Really terrible. I hope to convince my boss to let us donate some time to the Node community and put up a proper search engine. Imagine if you could search for a term and get results together with facets, which would allow you to further filter the results on author, number of dependent modules, recent usage, number of GH stars, date of the last commit, exclude packages without a GH repo, ...

It was a long road and in the end it again turned out that the simplest solutions are the best ones. I used the fact that every SSH session has a unique SSH_TTY environment variable, even if the same OS user is used by different SSH sessions. So I decided to use the Linux username to identify the user who's physically working at the workstation and SSH_TTY to identify SSHd users, no matter how many of them are SSHd into the box in parallel. This insight was the turning point, as it made it possible to solve the whole problem with a single line of code (using the configstore NPM package):

var conf = new Configstore('logsene' + (process.env.SSH_TTY || '/' + osenv.user()));

Configstore is a package (used by Yeoman, amongst others) that persists information inside the user's ~/.config directory, in a configuration file whose relative path and name you provide as a parameter.
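
Here is a rough sketch of how such a session store could be read and written around that one line. Configstore's get/set API and osenv.user() are real; the key names and the 30-minute constant are only illustrative, not necessarily what Logsene CLI uses:

var Configstore = require('configstore');
var osenv = require('osenv');

// one store per SSH session (SSH_TTY) or per local user (username)
var conf = new Configstore('logsene' + (process.env.SSH_TTY || '/' + osenv.user()));

// written after a successful login
conf.set('username', 'mbo');
conf.set('lastUsed', Date.now());   // session timestamp, refreshed on every command

// on the next command, decide whether the session is still alive
var expired = Date.now() - (conf.get('lastUsed') || 0) > 30 * 60 * 1000;
console.log(expired ? 'please log in again' : 'session still valid');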

To illustrate how that works, imagine there are currently four users working with L-CLI, all using the same OS X user account, named mbo. One of them is physically sitting at the box, while the other three are SSHd into it.

[Figure: four users sharing the same OS account: one sitting at the box, three SSHd in]

Looking at the Configstore command above, since the user sitting at the box doesn't have the SSH_TTY environment variable, she is assigned the following config file:

.config/configstore/logsene/logsene-mbo.json

The second user's SSH_TTY environment variable is set to /dev/ttys007, so he's assigned:

.config/configstore/logsene/dev/ttys007.json

Analogously, the third and fourth users have SSH_TTY set to /dev/ttys008 and /dev/ttys009, respectively.

So this is how it works:

  1. the user enters a search command
  2. L-CLI tries to locate the user's configuration file, using the method I just described
  3. if the file is not found, or if more than 30m have passed since the user's last command:
    1. ask the user to log in and use that information to retrieve the API key
    2. ask the user to choose the Logsene app she wants to work with (if the API key has more than one Logsene app) and retrieve the chosen app's APP key
    3. store the username, API key, APP key and session timestamp in the user's conf file
  4. update the session timestamp
  5. display the search results

OK, that solved the “session problem”.

Streams FTW
Ever since streams became widespread in Node, I've looked at them as something magical. It was unbelievable to see how a couple of pipe() functions can make code terse and transform the way you think about it. Streams are built on top of EventEmitter, which provides pub-sub primitives within an application. Piping is just a convenience mechanism that ties the output of a readable stream (data source) to the input of a writable stream (data destination) and handles all the nasty wiring and stream throttling bits for you (the latter alleviates the problem that occurs when the data rate of the source is faster than the processing rate of the destination: the source stream is asked to stop sending any more data until told otherwise by the destination stream).
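
To make the piping idea concrete, a tiny sketch (the file name is made up):

var fs = require('fs');

// pipe() wires a readable stream to a writable one and handles backpressure:
// if stdout cannot keep up, reading from the file is paused until it drains
fs.createReadStream('app.log').pipe(process.stdout);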

HTTP requests (used to fetch results from the Logsene API server) are basically streams. stdout (think of it as console.log), where search results should eventually be written, is a stream. So why would I interrupt those nice streaming interfaces with something as ugly as promises in the middle? Moreover, it's a much better experience if log entries start appearing almost immediately after you fire a search than to wait until the whole result set is buffered, just to get a potentially huge output splat all over your terminal, all at once. I yelled "streams all the way down" and got to work.

Since I had never worked with streams for real before, I first needed to re-educate myself a bit. Substack's Stream Handbook is still an excellent stream learning resource. But before starting with that, you should perhaps first go through the NodeSchool Stream Adventure course.

Anyway, the Logsene search API fully supports Elasticsearch client libraries, so to stream-enable the whole search experience I ended up using the elasticsearch-streams package, which provides streaming facilities on top of the official elasticsearch library. You could say that the package is emulating streaming by offsetting the retrieved portion of the result set (e.g., hits 1-50 in the first request, 51-100 in the second, ...). Unless you dug a bit deeper, you wouldn't even notice the difference. By utilizing streams, indexing becomes simply a writable stream and searching, of course, a readable stream. Here I also used Dominic Tarr's JSONStream package, which lets you manipulate streams of JSON data in a functional way. E.g., it allows you to map over JSON objects and to filter them on the fly, as they pass by. And that's exactly what I needed, since JSON is not particularly useful for presenting log entries to users. For that, I needed something more table-like, so I transformed JSON to TSV, following my colleague Stefan's advice.
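
Here is a rough sketch of the kind of pipeline this ends up being. The JSONPath, the field names and the getSearchResultStream helper are assumptions (in the real CLI the readable side comes from elasticsearch-streams), and the simplified Transform constructor requires a reasonably recent Node version:

var JSONStream = require('JSONStream');
var Transform = require('stream').Transform;

// hypothetical: a Readable stream emitting the raw Elasticsearch response as JSON text
var hits = getSearchResultStream();

// turn each log entry into one tab-separated line
var toTsv = new Transform({
  objectMode: true,
  transform: function (entry, enc, done) {
    done(null, [entry['@timestamp'], entry.severity, entry.message].join('\t') + '\n');
  }
});

hits
  .pipe(JSONStream.parse('hits.hits.*._source'))  // pluck individual log entries out of the response
  .pipe(toTsv)
  .pipe(process.stdout);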

Right about this point, while working with streams, something clicked in my head. I cannot explain what exactly happened, but somehow everything fell into place. I started feeling much more confident with Node. I mean, not that I was ever worried that I wouldn't be able to do the job, but you know those moments when you think of a thing you still need to do (but you still don't know how), then of another, and in a matter of seconds your mental to-do list becomes so large that you cannot help feeling a bit overwhelmed. Experience has taught me that the best thing to do in those situations is to calm down, stabilize your breathing pattern and go back to your actual to-do list. Go over it and add any new things that you just thought of. Look at this as a way to better the product you're working on, not as something to dwell on or be hindered by.

From this point on, it was a breeze. I did more in the following three days than in the previous two weeks. Node had just spread out before me, ready to be gulped away. And that's a great feeling!

OK, enough of the human story (that’s how my boss refers to my writing; either that or he just says essay instead of blog post), let’s see what this puppy can do…

-t flies

After two weeks of development I only had a basic set of functionalities: you could log in and do a regular relevancy-based ES search.

At this point, my attention was drawn to the fact that normal, relevancy-based search, when used by itself, doesn't play well with log entries.

If you think about it, that actually makes sense. When a user wants to check her logs, she is primarily interested in some specific time range (in fact, if a user just opens the Logsene web application without entering a search query, she gets the last hour of logs). Only after log entries are filtered down to a specific time range does the user need to be able to enter a query that is run against that range. You would rarely (if ever) want to search the whole log history, but even then, you're still speaking in time ranges.

So I needed to provide time range filtering functionality. I chose the popular Moment.js library to help me with date-time, duration and time range parsing.

Let me explain what I mean by date-time, duration and range:

  • date-time is simply a timestamp, e.g. -t 2016-06-24T18:42:36
  • duration is a length of time, represented with units of time: e.g. -t 1y8M8d8h8m8s
  • range is a provisional object that has start and end timestamps

Range is what we are really after. Every -t parameter must eventually yield a date-time range. The following table shows how that calculation is done, depending on the value of the -t parameter that the user has provided.

[Table: how each form of the -t parameter is resolved into a date-time range]

Note that the default range separator is the forward slash (standardized by ISO-8601). This can be customized with the --sep parameter.

Duration is basically a date-time modifier, which operates either on some specific date-time (i.e. timestamp) or on the current time. E.g. -t 5 will subtract 5 minutes from the current time and return all entries since the resulting date-time. It would yield a range query with only a gte (greater than or equal) boundary.

Also note that, when you use a duration in the second position in a range expression, it has to start with either a minus or a plus sign immediately following the forward slash (like in the 3rd, 4th, 6th and 7th examples in the table). That tells L-CLI whether to add the duration to the start or subtract it from the start.
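
As a rough sketch of how such a -t value could be turned into a range with Moment.js (this is simplified and is not the actual Logsene CLI implementation; it ignores the default-to-minutes rule and absolute timestamps):

var moment = require('moment');

var UNITS = { y: 'years', M: 'months', d: 'days', h: 'hours', m: 'minutes', s: 'seconds' };

// turn a duration token like "1h30m" into milliseconds
function durationToMs(dur) {
  var re = /(\d+)([yMdhms])/g, ms = 0, match;
  while ((match = re.exec(dur)) !== null) {
    ms += moment.duration(parseInt(match[1], 10), UNITS[match[2]]).asMilliseconds();
  }
  return ms;
}

// turn "9h", "9h/-5m" or "9h/+5m" into { gte, lte } boundaries
function rangeFromT(t) {
  var parts = t.split('/');                                   // default range separator
  var start = moment().subtract(durationToMs(parts[0]), 'ms');
  if (!parts[1]) return { gte: start.toISOString() };         // open-ended range
  var modifier = durationToMs(parts[1]);
  var end = parts[1][0] === '-'
    ? start.clone().subtract(modifier, 'ms')
    : start.clone().add(modifier, 'ms');
  return {
    gte: moment.min(start, end).toISOString(),
    lte: moment.max(start, end).toISOString()
  };
}

console.log(rangeFromT('9h/-5m'));   // between 9h 5m ago and 9h ago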

Let Me Show You a Few Examples
When a user issues her first L-CLI command (or if more than 30 minutes have passed since her last command), she is asked to login. After a successful login, she is prompted to choose a Logsene application that she wants to work with (if she has more than one, of course).

After that, the command is executed.

Let's say that a user installed L-CLI and starts it for the first time by issuing the simplest possible command, logsene search (without any parameters). L-CLI first asks the user to log in. Upon successful login, she is asked to choose the Logsene application that she wants to query. Now L-CLI has all the prerequisites to start doing its main job, searching logs. Since no parameters were given, L-CLI returns the last hour of log entries (the default maximum number of returned log entries is 200, which can be adjusted with the -s parameter).

logsene search

When a live session exists, L-CLI already has all the required user information, so there's no need to log in:

logsene search

To turn on tracing (used only for troubleshooting, since it messes up the TSV output):

logsene config set --trace [true]

Simple search with tracing turned on:

logsene search

Supply a timestamp to start the search from (returns log entries in the range whose lower bound is the timestamp and whose upper bound is not specified):

logsene search -t 2015-06-25T23:06:32

Search for documents that contain both terms, response and took.

The default operator is OR, which can be overridden with either -op AND or just --and:

logsene search response took -op AND

Search for documents that contain the phrase "triggered DYLD shared region" and were logged during the last 5 minutes.

logsene search "triggered DYLD shared region" -t 5m

Search for documents that were created between 9h ago and 8h 55m ago.

logsene search -t 9h/+5m

Search for documents that were created between 9h 5m ago and 9h ago.

logsene search -t 9h/-5m

Search for documents that contain either or both phrases and were created between last Sunday at 06:00 and now (morning translates to 06:00). Return up to 300 results, instead of the default 200.

logsene search "signature that validated" "signature is valid" -t "last Sunday morning" -s 300

Here's what the previous request looks like as it's being shipped to Logsene (after being translated by the elastic.js helper library):

[Figure: the Elasticsearch request body generated by elastic.js for the previous command]
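
For a rough idea, a request like that might translate into an Elasticsearch body along these lines (this is a hand-written illustration, not the exact output of elastic.js; the field names @timestamp and message, the clause structure and the timestamps are assumptions):

{
  "query": {
    "filtered": {
      "query": {
        "bool": {
          "should": [
            { "match_phrase": { "message": "signature that validated" } },
            { "match_phrase": { "message": "signature is valid" } }
          ]
        }
      },
      "filter": {
        "range": {
          "@timestamp": { "gte": "2015-06-21T06:00:00", "lte": "2015-06-25T14:30:00" }
        }
      }
    }
  },
  "size": 300,
  "sort": [ { "@timestamp": { "order": "asc" } } ]
}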

Usage: logsene search [query] [OPTIONS]

where OPTIONS may be:

-q <query>      Query string (-q parameter can be omitted)

-op AND         OPTIONAL Overrides default OR operator between multiple terms in a query

-t <interval>   OPTIONAL ISO 8601 datetime or duration or time range

-s <size>       OPTIONAL Number of matches to return. Defaults to 200

-o <offset>     OPTIONAL Number of matches to skip from the beginning. Defaults to 0

--json           OPTIONAL Returns JSON instead of TSV

--sep            OPTIONAL Sets the separator between two datetimes when specifying time range

Examples:

logsene search

returns last 1h of log entries

logsene search -q ERROR

returns last 1h of log entries that contain the term ERROR

logsene search UNDEFINED SEGFAULT

returns last 1h of log entries that have either of the terms

note: default operator is OR

logsene search SEGFAULT Segmentation -op AND

returns last 1h of log entries that have both terms

note: convenience parameter --and has the same effect

logsene search -q "Server not responding"

returns last 1h of log entries that contain the given phrase

logsene search "rare thing" -t 1y8M4d8h30m2s

returns all the log entries that contain the phrase "rare thing" reaching back to

1 year 8 months 4 days 8 hours 30 minutes and 2 seconds

note: when specifying duration, any datetime designator character can be omitted

(shown in the following two examples)

note: months must be specified with uppercase M (distinction from minutes)

note: minutes (m) are the default time unit when only a number is given

logsene search -t 1h30m

returns all the log entries from the last 1.5h

logsene search -t 90

equivalent to the previous example (default time unit is minute)

logsene search -t 2015-06-20T20:48

returns all the log entries that were logged after the provided datetime

note: allowed formats listed at the bottom of this help message

logsene search -t "2015-06-20 20:28"

returns all the log entries that were logged after the provided datetime

note: if a parameter contains spaces, it must be enclosed in quotes

logsene search -t 2015-06-16T22:27:41/2015-06-18T22:27:41

returns all the log entries that were logged between the two provided timestamps

note: date range must either contain forward slash between datetimes,

or a different range separator must be specified (shown in the next example)

logsene search -t "2015-06-16T22:27:41 TO 2015-06-18T22:27:41" --sep " TO "

same as previous command, except it sets the custom string separator that denotes a range

note: default separator is the forward slash (as per ISO-8601)

note: if a parameter contains spaces, it must be enclosed in quotes

logsene search -t "last Friday at 13/last Friday at 13:30"

it is also possible to use “human language” to designate datetime

note: it may be used only in place of datetime. Expressing range is not allowed

(e.g. "last friday between 12 and 14" is not allowed)

note: may yield unpredictable datetime values

logsene search -q ERROR -s 20

returns at most 20 latest log entries (within the last hour) with the term ERROR

logsene search ERROR -s 50 -o 20

returns chronologically sorted hits 21 through 70 (offset=20)

note: default sort order is ascending (for convenience – latest on the bottom)

logsene search --help

outputs this usage information

Allowed datetime formats:

YYYY[-]MM[-]DD[T][HH[:mm[:ss]]]

e.g.

'YYYY-MM-DD'

'YYYY-MM-DD HH:mm'

'YYYY-MM-DDTHH:mm'

'YYYYMMDD'

'YYYY-MM-DD HHmm'

'YYYYMMDD HH:mm'

'YYYYMMDD HHmm'

'YYYYMMDDHHmm'

'YYYYMMDDHH:mm'

'YYYY-MM-DDTHHmm'

'YYYYMMDDTHH:mm'

'YYYYMMDDTHHmm'

'YYYY-MM-DD HH:mm:ss'

'YYYY-MM-DD HHmmss'

'YYYY-MM-DDTHH:mm:ss'

'YYYY-MM-DDTHHmmss'

'YYYYMMDDHHmmss'

'YYYYMMDDTHHmmss'

note: to use UTC instead of local time, append Z to datetime

note: all datetime components are optional except date (YYYY, MM and DD)

If not specified, a component defaults to its lowest possible value

note: date part may be separated from time by T (ISO-8601), space or nothing at all

Allowed duration format:

[Ny][NM][Nd][Nh][Nm][Ns]

e.g.

1y2M8d22h8m48s

note: uppercase M must be used for months, lowercase m for minutes

note: if only a number is specified, it defaults to minutes

Allowed range formats

range can be expressed in all datetime/duration combinations:

datetime/datetime

datetime/{+|-}duration

duration/{+|-}duration

duration/datetime

where / is the default range separator string and the + or - sign is the duration designator

The following table shows how ranges are calculated, given the different input parameters

[Table: range calculation for each datetime/duration combination]

note: all allowable date/time formats are also permitted when specifying ranges

note: disallowed range separators:

Y, y, M, D, d, H, h, m, S, s, -, +, P, p, T, t

Allowed “human” formats:

10 minutes ago

yesterday at 12:30pm

last night (night becomes 19:00)

last month

last friday at 2pm

3 hours ago

2 weeks ago at 17

wednesday 2 weeks ago

2 months ago

last week saturday morning (morning becomes 06:00)

note: “human” format can be used instead of date-time

note: it is not possible to express duration with “human” format (e.g., “from 2 to 3 this morning”)

note: it is recommended to avoid human format, as it may yield unexpected results

Usage: logsene config set [OPTIONS]

where OPTIONS may be:

--api-key <apiKey>

--app-key <appKey>

--range-separator <sep>

--trace <true|false>

It is not necessary to explicitly set api-key or app-key.

Logsene CLI will ask you to log in and choose a Logsene application

if the keys are missing from the configuration

Examples:

logsene config set --api-key 11111111-1111-1111-1111-111111111111

sets the api key for the current session

logsene config set --app-key 22222222-2222-2222-2222-222222222222

sets the Logsene application key for the current session

logsene config set --range-separator TO

sets the default separator of two datetimes for time ranges (default is /, as per ISO-8601)

logsene config set --trace [true]

activates tracing for the current session (true can be omitted)

logsene config set --trace false

deactivates tracing for the current session

Usage: logsene config get [OPTION]

Where OPTION may be:

--api-key

--app-key

--app-name

--range-separator (used to separate two datetimes when specifying time range)

--trace

--all (return listing of all params from the current user's session)

Fork, yeah!
So, you see, there’s no need to use grep on your logs any more. Once your logs are shipped and securely stored in Logsene (which happens in real-time), simply use its little cousin, Logsene CLI. No configuration, no rummaging through servers manually, over SSH.

You can try Logsene CLI even if you don’t already have a Sematext account.

Getting a free, 30-day trial account is as simple as it gets! You'll be set up in less than 15 minutes to start playing with Logsene CLI. We won't ask you for your credit card information (it's not needed for a trial account, so why would we?).

The source code can be found on GitHub.

Please ping us back with your impressions, comments, suggestions... anything, really. You can also reach us on Twitter @sematext. And we would be exceptionally glad if you filed an issue or submitted a pull request on GitHub.

