I have both Timber and CloudWatch installed in the project, so I was able to view the logs on Timber even though CloudWatch was crashing due to the log formatter. I'm getting this error in the Timber logs:
```
CloudWatch installed in Logger terminating
** (Protocol.UndefinedError) protocol Jason.Encoder not implemented for {{2020, 11, 13}, {14, 32, 36, 913}} of type Tuple, Jason.Encoder protocol must always be explicitly implemented. This protocol is implemented for the following type(s): CloudWatch.InputLogEvent, Ecto.Association.NotLoaded, Ecto.Schema.Metadata, DateTime, Integer, List, Float, Map, Time, Atom, Decimal, NaiveDateTime, Date, Any, BitString, Jason.Fragment
    (jason 1.2.2) lib/jason.ex:150: Jason.encode!/2
    (ex_aws 2.1.6) lib/ex_aws/request.ex:19: ExAws.Request.request/6
    (ex_aws 2.1.6) lib/ex_aws/operation/json.ex:49: ExAws.Operation.ExAws.Operation.JSON.perform/2
    (cloud_watch 0.3.2) lib/cloud_watch/aws_proxy.ex:61: CloudWatch.AwsProxy.request/2
    (cloud_watch 0.3.2) lib/cloud_watch.ex:121: CloudWatch.do_flush/4
    (cloud_watch 0.3.2) lib/cloud_watch.ex:41: CloudWatch.handle_info/2
    (stdlib 3.11) gen_event.erl:577: :gen_event.server_update/4
    (stdlib 3.11) gen_event.erl:559: :gen_event.server_notify/4
    (stdlib 3.11) gen_event.erl:347: :gen_event.handle_msg/6
    (stdlib 3.11) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
Last message: :flush
State: %{buffer: [%CloudWatch.InputLogEvent{message: "[debug] QUERY OK source=\"event\" db=1.7ms\nSELECT max(e0.\"createTimeStamp\") FROM \"event\" AS e0 WHERE (e0.\"gameID\" = $1) [<<196, 234, 188, 198, 236, 184, 69, 45, 155, 81, 180, 126, 71, 252, 32, 236>>] \n", timestamp: {{2020, 11, 13}, {14, 32, 36, 913}}}], buffer_length: 1, buffer_size: 225, client: %{}, flushed_at: nil, format: {Statstrack.CloudWatchFormatter, :format}, level: :debug, log_group_name: "statstrack-web", log_stream_name: "statstrack-web-statstrack-prod-prod", max_buffer_size: 10485, max_timeout: 60000, sequence_token: nil}
```
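The failure is easy to reproduce in isolation, outside of CloudWatch entirely — Jason (1.2.2 here) simply ships no `Jason.Encoder` implementation for tuples:

```elixir
# Jason has no Encoder implementation for tuples, so encoding the raw
# Erlang logger timestamp raises Protocol.UndefinedError, exactly as
# in the stack trace above.
timestamp = {{2020, 11, 13}, {14, 32, 36, 913}}

try do
  Jason.encode!(timestamp)
rescue
  e in Protocol.UndefinedError ->
    IO.puts("crashed as expected: #{inspect(e.protocol)}")
end
```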
The previous hurdle I managed to clear was CloudWatch crashing on `CloudWatch.InputLogEvent` in a similar fashion to how it's crashing here on the Erlang date tuple. I added a custom log formatter that derives an encoder for that struct so Jason wouldn't crash:
```elixir
defmodule MyApp.CloudWatchFormatter do
  require Protocol

  Protocol.derive(Jason.Encoder, CloudWatch.InputLogEvent, only: [:message, :timestamp])
  # If I could put in another derive here to catch the tuple that would
  # be great, but there's no way for derive to do that AFAIK.

  @pattern Logger.Formatter.compile("[$level] $message $metadata\n")

  def format(level, message, timestamp, metadata) do
    Logger.Formatter.format(@pattern, level, message, timestamp, [])
  rescue
    error -> "could not format: #{inspect(error)} #{inspect({level, message, metadata})}\n"
  end
end
```
And then I plugged that into the cloud_watch config:
```elixir
config :logger, CloudWatch,
  format: {MyApp.CloudWatchFormatter, :format},
  level: :debug,
  log_group_name: "myapp-web",
  log_stream_name: "myapp-logs",
  max_buffer_size: 10_485,
  max_timeout: 60_000,
  metadata: []
```
This can't address the Erlang timestamp, which looks like `{{2020, 11, 13}, {14, 32, 36, 913}}`: `Protocol.derive/3` only works on structs, so I can't accommodate a bare tuple in the formatter the way I can a struct module like `CloudWatch.InputLogEvent`.
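One thing I haven't actually tried (so treat this as an untested sketch): while `derive` is struct-only, Jason does accept a hand-written `defimpl` for `Tuple`. A global implementation that recognizes the `{{y, m, d}, {h, min, s, ms}}` shape and encodes it as epoch milliseconds — which is also the format the CloudWatch `PutLogEvents` API expects for timestamps — might look like this:

```elixir
# Untested sketch: a global Jason.Encoder implementation for tuples.
# It special-cases the Erlang-style logger timestamp and encodes it as
# epoch milliseconds; any other tuple is encoded as a JSON array.
defimpl Jason.Encoder, for: Tuple do
  def encode({{y, mo, d}, {h, mi, s, ms}}, _opts)
      when is_integer(y) and is_integer(ms) do
    {:ok, naive} = NaiveDateTime.new(y, mo, d, h, mi, s, {ms * 1000, 3})

    naive
    |> DateTime.from_naive!("Etc/UTC")
    |> DateTime.to_unix(:millisecond)
    # A bare run of digits is valid iodata for a JSON number.
    |> Integer.to_string()
  end

  def encode(tuple, opts) do
    # Fall back to the List implementation for every other tuple shape.
    Jason.Encoder.List.encode(Tuple.to_list(tuple), opts)
  end
end
```

The obvious caveat is that this changes how *every* tuple in the app encodes, which is a big hammer; a fork of cloud_watch that converts the timestamp before buffering would be more contained.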
I tried forking this lib and making the formatter itself safe for Jason, but I couldn't get it working, so in the end I just switched the ex_aws JSON codec back to Poison, and now it's working... or possibly I'm just on to the next iteration of the problem.
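For anyone who wants the same escape hatch, the switch is just the ex_aws codec setting (assuming Poison is already in your deps):

```elixir
# config/config.exs
# Route ex_aws's request serialization through Poison instead of Jason.
config :ex_aws,
  json_codec: Poison
```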
It would be great if someone smarter than I could make this change so Jason can handle it.